mirror of https://github.com/fafhrd91/actix-web synced 2025-07-04 01:51:30 +02:00

Compare commits


105 Commits

Author SHA1 Message Date
8d4cb8c69a prepare awc release 3.1.0 2023-01-21 18:54:58 +00:00
dd9ac4d9b8 prepare actix-http release 3.3.0 2023-01-21 18:52:57 +00:00
72c80f9107 update tokio-uring support to 0.4 2023-01-21 18:46:44 +00:00
b00fe72cf6 Update base64 to 0.21 (#2966)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2023-01-21 01:36:08 +00:00
2f0b8a264a fix non-empty body of http2 HEAD response (#2920)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2023-01-21 00:51:49 +00:00
b9f0faafde add cache-status and cdn-cache-control header names (#2968)
* add cache-status and cdn-cache-control header names

* fix changelog

* update docs with rfc numbers
2023-01-21 00:02:54 +00:00
6627109984 Add fallible versions of test_utils helpers to actix-test (#2961)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2023-01-11 11:43:51 +00:00
b9f54c8796 use secure tokio version range
see RUSTSEC-2023-0001

fixes #2962
2023-01-10 08:58:38 +00:00
cfd40b4f15 Implement MessageBody for Cow<'static, {[u8], str}> (#2959) 2023-01-06 21:56:16 +00:00
08c2cdf641 http service finalizer for automatic h2c detection (#2957)
* http service finalizer for automatic h2c detection

* update changelog

* add h2c auto test
2023-01-03 14:43:02 +00:00
fbd0e5dd0a add headermap::retain (#2955)
* add headermap::retain

* update changelog and docs

* fix retain doc test
2023-01-02 13:38:07 +00:00
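For orientation, a minimal sketch of how the new `HeaderMap::retain` might be used; the closure signature is assumed to follow the usual retain pattern (`FnMut(&HeaderName, &mut HeaderValue) -> bool`):

use actix_http::header::{self, HeaderMap, HeaderValue};

fn strip_connection_headers(map: &mut HeaderMap) {
    // keep every header except the connection-specific ones
    map.retain(|name, _value| *name != header::CONNECTION && *name != header::TRANSFER_ENCODING);
}

fn main() {
    let mut map = HeaderMap::new();
    map.insert(header::HOST, HeaderValue::from_static("example.com"));
    map.insert(header::CONNECTION, HeaderValue::from_static("close"));

    strip_connection_headers(&mut map);
    assert_eq!(map.len(), 1); // only Host remains
}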
7b936bc443 add some useful header name constants (#2956) 2023-01-02 13:33:31 +00:00
d2364c80c4 improve error handling on new new example 2023-01-02 00:16:59 +00:00
77459ec415 add h2c example 2023-01-02 00:14:25 +00:00
6f0a6bd1bb address clippy lints
For intrepid commit message readers:
The choice to add allows for the inlined-format-args lint, instead of actually
inlining the arguments, is not clear-cut because our real-world MSRV is not clear.
We currently claim 1.60 as our MSRV, but this is mainly due to dependencies; I'm
fairly sure we could support < 1.58 if those deps are outdated in a user's
lockfile. We'll remove these allows again at some point soon.
2023-01-01 20:56:34 +00:00
06c3513bc0 add Allow header to resource's default 405 handler (#2949) 2022-12-21 20:28:45 +00:00
29bd6a1dd5 fix version requirement for futures_util 2022-12-18 01:34:48 +00:00
17f7cd2aae bump zstd to 0.12 2022-12-18 01:31:06 +00:00
ede645ee4e bump criterion to 0.4 2022-12-18 01:11:04 +00:00
6d48593a60 fix doc tests 2022-11-25 23:28:31 +00:00
3c69d078b2 add redirect service (#1961) 2022-11-25 21:44:52 +00:00
e7c34f2e45 tweak form docs 2022-11-25 21:38:57 +00:00
d708a4de6d add acceptable guard (#2265) 2022-11-25 21:04:24 +00:00
d97bd7ec17 fix msrv CI 2022-11-25 17:37:23 +00:00
fcd06c9896 workaround zstd msrv issue 2022-11-25 17:28:06 +00:00
1065043528 ci: use dtolnay's rust-toolchain action 2022-11-25 17:00:59 +00:00
45b77c6819 GitHub Workflows security hardening (#2923) 2022-11-04 00:42:22 +00:00
a2e2c30d59 use tokio-util deps directly where possible 2022-10-30 19:47:49 +00:00
83cd061c86 remove fakeshadow from author lists (#2921) 2022-10-25 16:37:04 +01:00
068909f1b3 Replace deprecated twoway with memchr (#2909) 2022-10-14 11:52:13 +00:00
f8cb71e789 remove incomplete doc comment 2022-10-14 13:20:38 +02:00
73b94e902d fix xhtml pages' content-disposition (#2903)
Co-authored-by: Yuki Okushi <jtitor@2k36.org>
2022-10-09 12:44:10 +01:00
ad7e67f940 add middleware::logger::custom_response_replace (#2631)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-09-26 18:44:51 +00:00
1519ae7772 clarify tokio::main docs 2022-09-26 12:29:57 +01:00
cc7145d41d rust 1.64 clippy run (#2891) 2022-09-25 20:54:17 +01:00
172c4c7a0a use noop hasher in extensions (#2890) 2022-09-25 15:32:26 +01:00
fd63305859 Fix actix-multipart field content_type() to return an Option (#2885)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-09-23 17:06:40 +00:00
ef64d6a27c update derive_more dependency to 0.99.8 (#2888) 2022-09-23 12:39:18 +00:00
4d3689db5e Remove unnecesary clones in extractor configs (#2884)
Co-authored-by: erhodes <erik@space-nav.com>
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-09-20 23:17:58 +00:00
894effb856 prepare actix-router release 0.5.1 2022-09-19 18:52:16 +01:00
07a7290432 Fix typo in error string for i32 parse in router deserialization (#2876)
* fix typo in error string for i32 parse

* update actix-router changelog for #2876

* Update CHANGES.md

Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-09-19 18:44:52 +01:00
bd5c0af0a6 Add ability to set default error handlers to the ErrorHandler middleware (#2784)
Co-authored-by: erhodes <erik@space-nav.com>
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-09-15 13:06:34 +00:00
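A rough sketch of what registering a default error handler could look like; the method name `default_handler` is taken from the PR title, and the handler signature mirrors the existing per-status `ErrorHandlers` API:

use actix_web::{
    dev::ServiceResponse,
    http::header::{HeaderName, HeaderValue},
    middleware::{ErrorHandlerResponse, ErrorHandlers},
    App, Result,
};

// Tag any error response that no specific handler claimed.
fn add_error_header<B>(mut res: ServiceResponse<B>) -> Result<ErrorHandlerResponse<B>> {
    res.response_mut().headers_mut().insert(
        HeaderName::from_static("x-error-handled"),
        HeaderValue::from_static("true"),
    );
    Ok(ErrorHandlerResponse::Response(res.map_into_left_body()))
}

fn main() {
    // assumed API from #2784: the default handler runs when no per-status handler matches
    let _app = App::new().wrap(ErrorHandlers::new().default_handler(add_error_header));
}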
c73fba16ce implement MessageBody for mut B (#2868) 2022-09-14 11:23:22 +01:00
909461087c add ContentDisposition::attachment constructor (#2867) 2022-09-13 01:19:25 +01:00
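Presumably the new constructor is used along these lines (filename and exact parameter type are illustrative, not confirmed by this log):

use actix_web::{http::header::ContentDisposition, HttpResponse};

fn download() -> HttpResponse {
    HttpResponse::Ok()
        // hypothetical usage: sets `Content-Disposition: attachment; filename="report.csv"`
        .insert_header(ContentDisposition::attachment("report.csv"))
        .body("id,total\n1,42\n")
}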
40f7ab38d2 prepare actix-web release 4.2.1 2022-09-12 10:43:03 +01:00
a9e44bcf07 fix -http version to 3.2.2 (#2871)
fixes #2869
2022-09-12 10:42:22 +01:00
7767cf3071 prepare actix-web release 4.2.0 2022-09-11 16:44:46 +01:00
b59a96d9d7 prepare actix-web-codegen release 4.1.0 2022-09-11 16:42:28 +01:00
037740bf62 prepare actix-http release 3.2.2 2022-09-11 16:41:29 +01:00
386258c285 clarify worker_max_blocking_threads default 2022-09-06 10:13:10 +01:00
99bf774e94 update gh-pages deploy action 2022-09-03 22:15:59 +01:00
35b0fd1a85 specify branch in doc job 2022-09-03 22:05:28 +01:00
0b5b4dcbf3 reduce size of docs branch 2022-09-03 21:56:37 +01:00
c993055fc8 replace askama_escape in favor of v_htmlescape (#2824) 2022-08-30 09:34:46 +01:00
679f61cf37 bump msrv to 1.59 2022-08-27 13:14:16 +01:00
056de320f0 fix scope doc example
fixes #2843
2022-08-25 03:17:48 +01:00
f220719fae prepare awc release 3.0.1 2022-08-25 03:13:31 +01:00
c9f91796df awc: correctly handle redirections that begins with // (#2840) 2022-08-25 03:12:58 +01:00
ea764b1d57 add feature annotations to docs 2022-07-31 23:40:09 +01:00
19aa14a9d6 re-order HttpServer methods for better docs 2022-07-31 22:10:51 +01:00
10746fb2fb improve HttpServer docs 2022-07-31 21:58:15 +01:00
4bbe60b609 document h2 ping-pong 2022-07-24 16:42:35 +01:00
8ff489aa90 apply fix from #2369 2022-07-24 16:35:00 +01:00
e0a88cea8d remove unwindsafe assertions 2022-07-24 02:47:12 +01:00
d78ff283af prepare actix-test release 0.1.0 2022-07-24 02:13:46 +01:00
ce6d520215 prepare actix-http-test release 3.0.0 2022-07-24 02:11:21 +01:00
3e25742a41 prepare actix-files release 0.6.2 2022-07-23 16:37:59 +01:00
20f4cfe6b5 fix partial ranges for video content (#2817)
fixes #2815
2022-07-23 16:27:01 +01:00
6408291ab0 appease clippy by deriving Eq on a bunch of items (#2818) 2022-07-23 16:26:48 +01:00
8d260e599f clippy 2022-07-23 02:48:28 +01:00
14bcf72ec1 web utilizes const header names 2022-07-22 20:21:58 +01:00
6485434a33 update bump script 2022-07-22 20:19:15 +01:00
16c7c16463 reduce scope of once_cell change 2022-07-22 20:19:02 +01:00
9b0fdca6e9 Remove some unnecessary uses of once_cell::sync::Lazy (#2816) 2022-07-22 20:18:38 +01:00
8759d79b03 routes macro allowing multiple paths per handler (#2718)
* WIP: basic implementation for `routes` macro

* chore: changelog, docs, tests

* error on missing methods

* Apply suggestions from code review

Co-authored-by: Igor Aleksanov <popzxc@yandex.ru>

* update test stderr expectation

* add additional tests

* fix stderr output

* remove useless ResourceType

this is dead code from back when .to and .to_async were different ways to add a service

Co-authored-by: Igor Aleksanov <popzxc@yandex.ru>
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-07-04 04:31:49 +00:00
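As a sketch of the feature described above, one handler can now be registered for several method/path combinations; the handler name and paths here are made up:

use actix_web::{routes, App, HttpResponse, Responder};

// One async handler answering two paths and two methods.
#[routes]
#[get("/health")]
#[get("/healthz")]
#[delete("/health")]
async fn health() -> impl Responder {
    HttpResponse::Ok().finish()
}

fn main() {
    // registered like any other macro-generated handler
    let _app = App::new().service(health);
}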
c0d5d7bdb5 add octal-ish CL test 2022-07-02 21:04:37 +01:00
40eab1f091 simplify simple decoder tests 2022-07-02 20:07:27 +01:00
75517cce82 install cargo hack in CI faster 2022-07-02 20:00:59 +01:00
9b51624b27 update cargo-cache to 0.8.2 2022-07-02 18:43:19 +01:00
8e2ae8cd40 install nextest faster 2022-07-02 18:38:08 +01:00
9a2f8450e0 install older cargo-edit 2022-07-02 17:40:03 +01:00
23ef51609e s/cargo-add/cargo-edit 2022-07-02 17:29:06 +01:00
f7d629a61a fix cargo-add in CI 2022-07-02 17:20:46 +01:00
e0845d9ad9 add msrv workarounds to ci 2022-07-02 17:12:24 +01:00
2f79daec16 only run tests on stable 2022-07-02 17:05:48 +01:00
f3f41a0cc7 prepare actix-http release 3.2.1 2022-07-02 16:50:54 +01:00
987067698b use sparse registry in CI 2022-07-01 12:45:26 +01:00
b62f1b4ef7 migrate deprecated method in docs 2022-07-01 12:40:00 +01:00
df5257c373 update trust dns resolver 2022-07-01 10:21:46 +01:00
226ea696ce update dev deps 2022-07-01 10:19:28 +01:00
e524fc86ea add HTTP/0.9 rejection test 2022-07-01 09:03:57 +01:00
7e990e423f add http/1.0 GET parsing tests 2022-07-01 08:24:45 +01:00
8f9a12ed5d fix parsing ambiguities for HTTP/1.0 requests (#2794)
* fix HRS vuln when first CL header is 0

* ignore TE headers in http/1.0 reqs

* update changelog

* disallow HTTP/1.0 requests without a CL header

* fix test

* broken fix for http1.0 post requests
2022-07-01 08:23:40 +01:00
c6eba2da9b prepare actix-http release 3.2.0 (#2801) 2022-07-01 06:16:17 +01:00
06c7945801 retain previously set vary headers when using compress (#2798)
* retain previously set vary headers when using compress
2022-06-30 09:19:16 +01:00
0dba6310c6 Expose option for setting TLS handshake timeout (#2752)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-06-27 02:57:21 +00:00
f7d7d92984 address clippy lints 2022-06-27 03:12:36 +01:00
3d6ea7fe9b Improve documentation for actix-web-actors (#2788) 2022-06-26 16:45:02 +00:00
8dbf7da89f Fix common grammar mistakes and add small documentation for AppConfig's Default implementation (#2793) 2022-06-25 14:01:06 +00:00
de92b3be2e fix unrecoverable Err(Overflow) in websocket frame parser (#2790) 2022-06-24 03:46:17 +00:00
5d0e8138ee Add getters for &ServiceRequest (#2786) 2022-06-22 21:02:03 +01:00
6b7196225e Bump up MSRV to 1.57 (#2789) 2022-06-22 12:08:06 +01:00
265fa0d050 Add link to MongoDB example in README (#2783) 2022-06-15 22:38:10 +01:00
062127a210 Revert "actix-http: Pull actix-web dev-dep from Git repo"
This reverts commit 3926416580.
2022-06-12 00:55:06 +09:00
3926416580 actix-http: Pull actix-web dev-dep from Git repo
The published version of actix-web depends on a buggy version of zstd crate,
temporarily use actix-web on git repo to avoid the build failure.

Signed-off-by: Yuki Okushi <jtitor@2k36.org>
2022-06-12 00:48:08 +09:00
168 changed files with 3700 additions and 1172 deletions

View File

@@ -5,6 +5,9 @@ on:
     branches:
       - master
 
+permissions:
+  contents: read # to fetch code (actions/checkout)
+
 jobs:
   check_benchmark:
     runs-on: ubuntu-latest

View File

@@ -4,6 +4,9 @@ on:
   push:
     branches: [master]
 
+permissions:
+  contents: read # to fetch code (actions/checkout)
+
 jobs:
   build_and_test_nightly:
     strategy:
@@ -23,6 +26,7 @@ jobs:
       CI: 1
       CARGO_INCREMENTAL: 0
       VCPKGRS_DYNAMIC: 1
+      CARGO_UNSTABLE_SPARSE_REGISTRY: true
     steps:
       - uses: actions/checkout@v2
@@ -44,18 +48,15 @@
           profile: minimal
           override: true
 
+      - name: Install cargo-hack
+        uses: taiki-e/install-action@cargo-hack
+
       - name: Generate Cargo.lock
         uses: actions-rs/cargo@v1
         with: { command: generate-lockfile }
       - name: Cache Dependencies
         uses: Swatinem/rust-cache@v1.2.0
 
-      - name: Install cargo-hack
-        uses: actions-rs/cargo@v1
-        with:
-          command: install
-          args: cargo-hack
-
       - name: check minimal
         uses: actions-rs/cargo@v1
         with: { command: ci-check-min }
@@ -80,69 +81,56 @@ jobs:
       - name: Clear the cargo caches
         run: |
-          cargo install cargo-cache --version 0.6.3 --no-default-features --features ci-autoclean
+          cargo install cargo-cache --version 0.8.2 --no-default-features --features ci-autoclean
           cargo-cache
 
   ci_feature_powerset_check:
     name: Verify Feature Combinations
     runs-on: ubuntu-latest
+    env:
+      CI: 1
+      CARGO_INCREMENTAL: 0
     steps:
       - uses: actions/checkout@v2
 
-      - name: Install stable
-        uses: actions-rs/toolchain@v1
-        with:
-          toolchain: stable-x86_64-unknown-linux-gnu
-          profile: minimal
-          override: true
+      - uses: dtolnay/rust-toolchain@stable
+
+      - name: Install cargo-hack
+        uses: taiki-e/install-action@cargo-hack
 
       - name: Generate Cargo.lock
-        uses: actions-rs/cargo@v1
-        with: { command: generate-lockfile }
+        run: cargo generate-lockfile
 
       - name: Cache Dependencies
         uses: Swatinem/rust-cache@v1.2.0
 
-      - name: Install cargo-hack
-        uses: actions-rs/cargo@v1
-        with:
-          command: install
-          args: cargo-hack
-
       - name: check feature combinations
-        uses: actions-rs/cargo@v1
-        with: { command: ci-check-all-feature-powerset }
+        run: cargo ci-check-all-feature-powerset
 
       - name: check feature combinations
-        uses: actions-rs/cargo@v1
-        with: { command: ci-check-all-feature-powerset-linux }
+        run: cargo ci-check-all-feature-powerset-linux
 
   nextest:
     name: nextest
     runs-on: ubuntu-latest
+    env:
+      CI: 1
+      CARGO_INCREMENTAL: 0
     steps:
       - uses: actions/checkout@v2
 
-      - name: Install Rust
-        uses: actions-rs/toolchain@v1
-        with:
-          toolchain: stable
-          profile: minimal
-          override: true
+      - uses: dtolnay/rust-toolchain@stable
+
+      - name: Install nextest
+        uses: taiki-e/install-action@nextest
 
       - name: Generate Cargo.lock
-        uses: actions-rs/cargo@v1
-        with: { command: generate-lockfile }
+        run: cargo generate-lockfile
 
       - name: Cache Dependencies
         uses: Swatinem/rust-cache@v1.3.0
 
-      - name: Install cargo-nextest
-        uses: actions-rs/cargo@v1
-        with:
-          command: install
-          args: cargo-nextest
-
       - name: Test with cargo-nextest
-        uses: actions-rs/cargo@v1
-        with:
-          command: nextest
-          args: run
+        run: cargo nextest run

View File

@@ -6,6 +6,9 @@ on:
   push:
     branches: [master]
 
+permissions:
+  contents: read # to fetch code (actions/checkout)
+
 jobs:
   build_and_test:
     strategy:
@@ -16,7 +19,7 @@ jobs:
           - { name: macOS, os: macos-latest, triple: x86_64-apple-darwin }
          - { name: Windows, os: windows-2022, triple: x86_64-pc-windows-msvc }
         version:
-          - 1.56.0 # MSRV
+          - 1.59.0 # MSRV
           - stable
 
     name: ${{ matrix.target.name }} / ${{ matrix.version }}
@@ -47,17 +50,26 @@
           profile: minimal
           override: true
 
+      - name: Install cargo-hack
+        uses: taiki-e/install-action@cargo-hack
+
+      - name: workaround MSRV issues
+        if: matrix.version != 'stable'
+        run: |
+          cargo install cargo-edit --version=0.8.0
+          cargo add const-str@0.3 --dev -p=actix-web
+          cargo add const-str@0.3 --dev -p=awc
+
       - name: Generate Cargo.lock
         uses: actions-rs/cargo@v1
         with: { command: generate-lockfile }
       - name: Cache Dependencies
         uses: Swatinem/rust-cache@v1.2.0
 
-      - name: Install cargo-hack
-        uses: actions-rs/cargo@v1
-        with:
-          command: install
-          args: cargo-hack
+      - name: workaround MSRV issues
+        if: matrix.version != 'stable'
+        run: |
+          cargo update -p=zstd-sys --precise=2.0.1+zstd.1.5.2
 
       - name: check minimal
         uses: actions-rs/cargo@v1
@@ -83,7 +95,7 @@
       - name: Clear the cargo caches
         run: |
-          cargo install cargo-cache --version 0.6.3 --no-default-features --features ci-autoclean
+          cargo install cargo-cache --version 0.8.2 --no-default-features --features ci-autoclean
           cargo-cache
 
   io-uring:
@@ -92,16 +104,10 @@
     steps:
       - uses: actions/checkout@v2
 
-      - name: Install Rust
-        uses: actions-rs/toolchain@v1
-        with:
-          toolchain: stable-x86_64-unknown-linux-gnu
-          profile: minimal
-          override: true
+      - uses: dtolnay/rust-toolchain@stable
 
       - name: Generate Cargo.lock
-        uses: actions-rs/cargo@v1
-        with: { command: generate-lockfile }
+        run: cargo generate-lockfile
 
       - name: Cache Dependencies
         uses: Swatinem/rust-cache@v1.3.0
@@ -119,20 +125,13 @@
     steps:
       - uses: actions/checkout@v2
 
-      - name: Install Rust (nightly)
-        uses: actions-rs/toolchain@v1
-        with:
-          toolchain: nightly-x86_64-unknown-linux-gnu
-          profile: minimal
-          override: true
+      - uses: dtolnay/rust-toolchain@nightly
 
       - name: Generate Cargo.lock
-        uses: actions-rs/cargo@v1
-        with: { command: generate-lockfile }
+        run: cargo generate-lockfile
 
       - name: Cache Dependencies
         uses: Swatinem/rust-cache@v1.3.0
 
       - name: doc tests
-        uses: actions-rs/cargo@v1
+        run: cargo ci-doctest
         timeout-minutes: 60
-        with: { command: ci-doctest }

View File

@@ -9,54 +9,37 @@
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v2
 
-      - name: Install Rust
-        uses: actions-rs/toolchain@v1
-        with:
-          toolchain: stable
-          profile: minimal
-          components: rustfmt
-      - name: Check with rustfmt
-        uses: actions-rs/cargo@v1
-        with:
-          command: fmt
-          args: --all -- --check
+      - uses: dtolnay/rust-toolchain@nightly
+        with: { components: rustfmt }
+
+      - run: cargo fmt --all -- --check
 
   clippy:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v2
 
-      - name: Install Rust
-        uses: actions-rs/toolchain@v1
-        with:
-          toolchain: stable
-          profile: minimal
-          components: clippy
-          override: true
+      - uses: dtolnay/rust-toolchain@stable
+        with: { components: clippy }
 
       - name: Generate Cargo.lock
-        uses: actions-rs/cargo@v1
-        with: { command: generate-lockfile }
+        run: cargo generate-lockfile
 
       - name: Cache Dependencies
         uses: Swatinem/rust-cache@v1.2.0
 
       - name: Check with Clippy
         uses: actions-rs/clippy-check@v1
         with:
+          token: ${{ secrets.GITHUB_TOKEN }}
           args: --workspace --tests --examples --all-features
-          token: ${{ secrets.GITHUB_TOKEN }}
 
   lint-docs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v2
 
-      - name: Install Rust
-        uses: actions-rs/toolchain@v1
-        with:
-          toolchain: stable
-          profile: minimal
-          components: rust-docs
+      - uses: dtolnay/rust-toolchain@stable
+        with: { components: rust-docs }
 
       - name: Check for broken intra-doc links
         uses: actions-rs/cargo@v1
         env:

View File

@@ -4,32 +4,29 @@ on:
   push:
     branches: [master]
 
+permissions: {}
+
 jobs:
   build:
+    permissions:
+      contents: write # to push changes in repo (jamesives/github-pages-deploy-action)
+
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v2
 
-      - name: Install Rust
-        uses: actions-rs/toolchain@v1
-        with:
-          toolchain: nightly-x86_64-unknown-linux-gnu
-          profile: minimal
-          override: true
+      - uses: dtolnay/rust-toolchain@nightly
 
       - name: Build Docs
-        uses: actions-rs/cargo@v1
-        with:
-          command: doc
-          args: --workspace --all-features --no-deps
+        run: cargo +nightly doc --no-deps --workspace --all-features
+        env:
+          RUSTDOCFLAGS: --cfg=docsrs
 
       - name: Tweak HTML
         run: echo '<meta http-equiv="refresh" content="0;url=actix_web/index.html">' > target/doc/index.html
 
       - name: Deploy to GitHub Pages
-        uses: JamesIves/github-pages-deploy-action@3.7.1
+        uses: JamesIves/github-pages-deploy-action@v4.4.1
         with:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          BRANCH: gh-pages
-          FOLDER: target/doc
+          folder: target/doc
+          single-commit: true

View File

@ -1,6 +1,17 @@
# Changes # Changes
## Unreleased - 2021-xx-xx ## Unreleased - 2022-xx-xx
- XHTML files now use `Content-Disposition: inline` instead of `attachment`. [#2903]
- Minimum supported Rust version (MSRV) is now 1.59 due to transitive `time` dependency.
- Update `tokio-uring` dependency to `0.4`.
[#2903]: https://github.com/actix/actix-web/pull/2903
## 0.6.2 - 2022-07-23
- Allow partial range responses for video content to start streaming sooner. [#2817]
- Minimum supported Rust version (MSRV) is now 1.57 due to transitive `time` dependency.
[#2817]: https://github.com/actix/actix-web/pull/2817
## 0.6.1 - 2022-06-11 ## 0.6.1 - 2022-06-11

View File

@ -1,9 +1,8 @@
[package] [package]
name = "actix-files" name = "actix-files"
version = "0.6.1" version = "0.6.2"
authors = [ authors = [
"Nikolay Kim <fafhrd91@gmail.com>", "Nikolay Kim <fafhrd91@gmail.com>",
"fakeshadow <24548779@qq.com>",
"Rob Ede <robjtede@icloud.com>", "Rob Ede <robjtede@icloud.com>",
] ]
description = "Static file serving for Actix Web" description = "Static file serving for Actix Web"
@ -27,25 +26,25 @@ actix-service = "2"
actix-utils = "3" actix-utils = "3"
actix-web = { version = "4", default-features = false } actix-web = { version = "4", default-features = false }
askama_escape = "0.10"
bitflags = "1" bitflags = "1"
bytes = "1" bytes = "1"
derive_more = "0.99.5" derive_more = "0.99.5"
futures-core = { version = "0.3.7", default-features = false, features = ["alloc"] } futures-core = { version = "0.3.17", default-features = false, features = ["alloc"] }
http-range = "0.1.4" http-range = "0.1.4"
log = "0.4" log = "0.4"
mime = "0.3" mime = "0.3"
mime_guess = "2.0.1" mime_guess = "2.0.1"
percent-encoding = "2.1" percent-encoding = "2.1"
pin-project-lite = "0.2.7" pin-project-lite = "0.2.7"
v_htmlescape= "0.15"
# experimental-io-uring # experimental-io-uring
[target.'cfg(target_os = "linux")'.dependencies] [target.'cfg(target_os = "linux")'.dependencies]
tokio-uring = { version = "0.3", optional = true, features = ["bytes"] } tokio-uring = { version = "0.4", optional = true, features = ["bytes"] }
actix-server = { version = "2.1", optional = true } # ensure matching tokio-uring versions actix-server = { version = "2.2", optional = true } # ensure matching tokio-uring versions
[dev-dependencies] [dev-dependencies]
actix-rt = "2.7" actix-rt = "2.7"
actix-test = "0.1.0-beta.13" actix-test = "0.1"
actix-web = "4" actix-web = "4"
tempfile = "3.2" tempfile = "3.2"

View File

@ -3,11 +3,11 @@
> Static file serving for Actix Web > Static file serving for Actix Web
[![crates.io](https://img.shields.io/crates/v/actix-files?label=latest)](https://crates.io/crates/actix-files) [![crates.io](https://img.shields.io/crates/v/actix-files?label=latest)](https://crates.io/crates/actix-files)
[![Documentation](https://docs.rs/actix-files/badge.svg?version=0.6.1)](https://docs.rs/actix-files/0.6.1) [![Documentation](https://docs.rs/actix-files/badge.svg?version=0.6.2)](https://docs.rs/actix-files/0.6.2)
![Version](https://img.shields.io/badge/rustc-1.56+-ab6000.svg) ![Version](https://img.shields.io/badge/rustc-1.59+-ab6000.svg)
![License](https://img.shields.io/crates/l/actix-files.svg) ![License](https://img.shields.io/crates/l/actix-files.svg)
<br /> <br />
[![dependency status](https://deps.rs/crate/actix-files/0.6.1/status.svg)](https://deps.rs/crate/actix-files/0.6.1) [![dependency status](https://deps.rs/crate/actix-files/0.6.2/status.svg)](https://deps.rs/crate/actix-files/0.6.2)
[![Download](https://img.shields.io/crates/d/actix-files.svg)](https://crates.io/crates/actix-files) [![Download](https://img.shields.io/crates/d/actix-files.svg)](https://crates.io/crates/actix-files)
[![Chat on Discord](https://img.shields.io/discord/771444961383153695?label=chat&logo=discord)](https://discord.gg/NWpN5mmg3x) [![Chat on Discord](https://img.shields.io/discord/771444961383153695?label=chat&logo=discord)](https://discord.gg/NWpN5mmg3x)

View File

@ -1,8 +1,8 @@
use std::{fmt::Write, fs::DirEntry, io, path::Path, path::PathBuf}; use std::{fmt::Write, fs::DirEntry, io, path::Path, path::PathBuf};
use actix_web::{dev::ServiceResponse, HttpRequest, HttpResponse}; use actix_web::{dev::ServiceResponse, HttpRequest, HttpResponse};
use askama_escape::{escape as escape_html_entity, Html};
use percent_encoding::{utf8_percent_encode, CONTROLS}; use percent_encoding::{utf8_percent_encode, CONTROLS};
use v_htmlescape::escape as escape_html_entity;
/// A directory; responds with the generated directory listing. /// A directory; responds with the generated directory listing.
#[derive(Debug)] #[derive(Debug)]
@ -59,7 +59,7 @@ macro_rules! encode_file_url {
/// ``` /// ```
macro_rules! encode_file_name { macro_rules! encode_file_name {
($entry:ident) => { ($entry:ident) => {
escape_html_entity(&$entry.file_name().to_string_lossy(), Html) escape_html_entity(&$entry.file_name().to_string_lossy())
}; };
} }

View File

@ -2,7 +2,7 @@ use actix_web::{http::StatusCode, ResponseError};
use derive_more::Display; use derive_more::Display;
/// Errors which can occur when serving static files. /// Errors which can occur when serving static files.
#[derive(Display, Debug, PartialEq)] #[derive(Debug, PartialEq, Eq, Display)]
pub enum FilesError { pub enum FilesError {
/// Path is not a directory /// Path is not a directory
#[allow(dead_code)] #[allow(dead_code)]
@ -22,7 +22,7 @@ impl ResponseError for FilesError {
} }
#[allow(clippy::enum_variant_names)] #[allow(clippy::enum_variant_names)]
#[derive(Display, Debug, PartialEq)] #[derive(Debug, PartialEq, Eq, Display)]
#[non_exhaustive] #[non_exhaustive]
pub enum UriSegmentError { pub enum UriSegmentError {
/// The segment started with the wrapped invalid character. /// The segment started with the wrapped invalid character.

View File

@ -13,6 +13,7 @@
#![deny(rust_2018_idioms, nonstandard_style)] #![deny(rust_2018_idioms, nonstandard_style)]
#![warn(future_incompatible, missing_docs, missing_debug_implementations)] #![warn(future_incompatible, missing_docs, missing_debug_implementations)]
#![allow(clippy::uninlined_format_args)]
use actix_service::boxed::{BoxService, BoxServiceFactory}; use actix_service::boxed::{BoxService, BoxServiceFactory};
use actix_web::{ use actix_web::{

View File

@ -132,7 +132,7 @@ impl NamedFile {
mime::IMAGE | mime::TEXT | mime::AUDIO | mime::VIDEO => DispositionType::Inline, mime::IMAGE | mime::TEXT | mime::AUDIO | mime::VIDEO => DispositionType::Inline,
mime::APPLICATION => match ct.subtype() { mime::APPLICATION => match ct.subtype() {
mime::JAVASCRIPT | mime::JSON => DispositionType::Inline, mime::JAVASCRIPT | mime::JSON => DispositionType::Inline,
name if name == "wasm" => DispositionType::Inline, name if name == "wasm" || name == "xhtml" => DispositionType::Inline,
_ => DispositionType::Attachment, _ => DispositionType::Attachment,
}, },
_ => DispositionType::Attachment, _ => DispositionType::Attachment,
@ -528,11 +528,26 @@ impl NamedFile {
length = ranges[0].length; length = ranges[0].length;
offset = ranges[0].start; offset = ranges[0].start;
// don't allow compression middleware to modify partial content // When a Content-Encoding header is present in a 206 partial content response
res.insert_header(( // for video content, it prevents browser video players from starting playback
header::CONTENT_ENCODING, // before loading the whole video and also prevents seeking.
HeaderValue::from_static("identity"), //
)); // See: https://github.com/actix/actix-web/issues/2815
//
// The assumption of this fix is that the video player knows to not send an
// Accept-Encoding header for this request and that downstream middleware will
// not attempt compression for requests without it.
//
// TODO: Solve question around what to do if self.encoding is set and partial
// range is requested. Reject request? Ignoring self.encoding seems wrong, too.
// In practice, it should not come up.
if req.headers().contains_key(&header::ACCEPT_ENCODING) {
// don't allow compression middleware to modify partial content
res.insert_header((
header::CONTENT_ENCODING,
HeaderValue::from_static("identity"),
));
}
res.insert_header(( res.insert_header((
header::CONTENT_RANGE, header::CONTENT_RANGE,

View File

@ -23,7 +23,7 @@ impl Deref for FilesService {
type Target = FilesServiceInner; type Target = FilesServiceInner;
fn deref(&self) -> &Self::Target { fn deref(&self) -> &Self::Target {
&*self.0 &self.0
} }
} }

View File

@@ -1,11 +1,11 @@
-use actix_files::Files;
+use actix_files::{Files, NamedFile};
 use actix_web::{
     http::{
         header::{self, HeaderValue},
         StatusCode,
     },
     test::{self, TestRequest},
-    App,
+    web, App,
 };
#[actix_web::test] #[actix_web::test]
@ -36,3 +36,31 @@ async fn test_utf8_file_contents() {
Some(&HeaderValue::from_static("text/plain")), Some(&HeaderValue::from_static("text/plain")),
); );
} }
#[actix_web::test]
async fn partial_range_response_encoding() {
let srv = test::init_service(App::new().default_service(web::to(|| async {
NamedFile::open_async("./tests/test.binary").await.unwrap()
})))
.await;
// range request without accept-encoding returns no content-encoding header
let req = TestRequest::with_uri("/")
.append_header((header::RANGE, "bytes=10-20"))
.to_request();
let res = test::call_service(&srv, req).await;
assert_eq!(res.status(), StatusCode::PARTIAL_CONTENT);
assert!(!res.headers().contains_key(header::CONTENT_ENCODING));
// range request with accept-encoding returns a content-encoding header
let req = TestRequest::with_uri("/")
.append_header((header::RANGE, "bytes=10-20"))
.append_header((header::ACCEPT_ENCODING, "identity"))
.to_request();
let res = test::call_service(&srv, req).await;
assert_eq!(res.status(), StatusCode::PARTIAL_CONTENT);
assert_eq!(
res.headers().get(header::CONTENT_ENCODING).unwrap(),
"identity"
);
}

View File

@ -1,9 +1,25 @@
# Changes # Changes
## Unreleased - 2021-xx-xx ## Unreleased - 2022-xx-xx
- Minimum supported Rust version (MSRV) is now 1.56 due to transitive `hashbrown` dependency. - Minimum supported Rust version (MSRV) is now 1.59.
## 3.0.0 - 2022-07-24
- `TestServer::stop` is now async and will wait for the server and system to shutdown. [#2442]
- Added `TestServer::client_headers` method. [#2097]
- Update `actix-server` dependency to `2`.
- Update `actix-tls` dependency to `3`.
- Update `bytes` to `1.0`. [#1813]
- Minimum supported Rust version (MSRV) is now 1.57.
[#2442]: https://github.com/actix/actix-web/pull/2442
[#2097]: https://github.com/actix/actix-web/pull/2097
[#1813]: https://github.com/actix/actix-web/pull/1813
<details>
<summary>3.0.0 Pre-Releases</summary>
## 3.0.0-beta.13 - 2022-02-16 ## 3.0.0-beta.13 - 2022-02-16
- No significant changes since `3.0.0-beta.12`. - No significant changes since `3.0.0-beta.12`.
@ -69,6 +85,7 @@
[#1813]: https://github.com/actix/actix-web/pull/1813 [#1813]: https://github.com/actix/actix-web/pull/1813
</details>
## 2.1.0 - 2020-11-25 ## 2.1.0 - 2020-11-25
- Add ability to set address for `TestServer`. [#1645] - Add ability to set address for `TestServer`. [#1645]

View File

@ -1,6 +1,6 @@
[package] [package]
name = "actix-http-test" name = "actix-http-test"
version = "3.0.0-beta.13" version = "3.0.0"
authors = ["Nikolay Kim <fafhrd91@gmail.com>"] authors = ["Nikolay Kim <fafhrd91@gmail.com>"]
description = "Various helpers for Actix applications to use during testing" description = "Various helpers for Actix applications to use during testing"
keywords = ["http", "web", "framework", "async", "futures"] keywords = ["http", "web", "framework", "async", "futures"]
@ -37,9 +37,8 @@ actix-rt = "2.2"
actix-server = "2" actix-server = "2"
awc = { version = "3", default-features = false } awc = { version = "3", default-features = false }
base64 = "0.13"
bytes = "1" bytes = "1"
futures-core = { version = "0.3.7", default-features = false } futures-core = { version = "0.3.17", default-features = false }
http = "0.2.5" http = "0.2.5"
log = "0.4" log = "0.4"
socket2 = "0.4" socket2 = "0.4"
@ -48,7 +47,7 @@ serde_json = "1.0"
slab = "0.4" slab = "0.4"
serde_urlencoded = "0.7" serde_urlencoded = "0.7"
tls-openssl = { version = "0.10.9", package = "openssl", optional = true } tls-openssl = { version = "0.10.9", package = "openssl", optional = true }
tokio = { version = "1.8.4", features = ["sync"] } tokio = { version = "1.18.4", features = ["sync"] }
[dev-dependencies] [dev-dependencies]
actix-web = { version = "4", default-features = false, features = ["cookies"] } actix-web = { version = "4", default-features = false, features = ["cookies"] }

View File

@ -3,11 +3,11 @@
> Various helpers for Actix applications to use during testing. > Various helpers for Actix applications to use during testing.
[![crates.io](https://img.shields.io/crates/v/actix-http-test?label=latest)](https://crates.io/crates/actix-http-test) [![crates.io](https://img.shields.io/crates/v/actix-http-test?label=latest)](https://crates.io/crates/actix-http-test)
[![Documentation](https://docs.rs/actix-http-test/badge.svg?version=3.0.0-beta.13)](https://docs.rs/actix-http-test/3.0.0-beta.13) [![Documentation](https://docs.rs/actix-http-test/badge.svg?version=3.0.0)](https://docs.rs/actix-http-test/3.0.0)
![Version](https://img.shields.io/badge/rustc-1.56+-ab6000.svg) ![Version](https://img.shields.io/badge/rustc-1.59+-ab6000.svg)
![MIT or Apache 2.0 licensed](https://img.shields.io/crates/l/actix-http-test) ![MIT or Apache 2.0 licensed](https://img.shields.io/crates/l/actix-http-test)
<br> <br>
[![Dependency Status](https://deps.rs/crate/actix-http-test/3.0.0-beta.13/status.svg)](https://deps.rs/crate/actix-http-test/3.0.0-beta.13) [![Dependency Status](https://deps.rs/crate/actix-http-test/3.0.0/status.svg)](https://deps.rs/crate/actix-http-test/3.0.0)
[![Download](https://img.shields.io/crates/d/actix-http-test.svg)](https://crates.io/crates/actix-http-test) [![Download](https://img.shields.io/crates/d/actix-http-test.svg)](https://crates.io/crates/actix-http-test)
[![Chat on Discord](https://img.shields.io/discord/771444961383153695?label=chat&logo=discord)](https://discord.gg/NWpN5mmg3x) [![Chat on Discord](https://img.shields.io/discord/771444961383153695?label=chat&logo=discord)](https://discord.gg/NWpN5mmg3x)

View File

@ -2,6 +2,7 @@
#![deny(rust_2018_idioms, nonstandard_style)] #![deny(rust_2018_idioms, nonstandard_style)]
#![warn(future_incompatible)] #![warn(future_incompatible)]
#![allow(clippy::uninlined_format_args)]
#![doc(html_logo_url = "https://actix.rs/img/logo.png")] #![doc(html_logo_url = "https://actix.rs/img/logo.png")]
#![doc(html_favicon_url = "https://actix.rs/favicon.ico")] #![doc(html_favicon_url = "https://actix.rs/favicon.ico")]
@ -87,6 +88,7 @@ pub async fn test_server_with_addr<F: ServerServiceFactory<TcpStream>>(
// notify TestServer that server and system have shut down // notify TestServer that server and system have shut down
// all thread managed resources should be dropped at this point // all thread managed resources should be dropped at this point
#[allow(clippy::let_underscore_future)]
let _ = thread_stop_tx.send(()); let _ = thread_stop_tx.send(());
}); });
@ -294,6 +296,7 @@ impl Drop for TestServer {
// without needing to await anything // without needing to await anything
// signal server to stop // signal server to stop
#[allow(clippy::let_underscore_future)]
let _ = self.server.stop(true); let _ = self.server.stop(true);
// signal system to stop // signal system to stop

View File

@ -1,6 +1,68 @@
# Changes # Changes
## Unreleased - 2021-xx-xx ## Unreleased - 2022-xx-xx
## 3.3.0 - 2023-01-21
### Added
- Implement `MessageBody` for `Cow<'static, str>` and `Cow<'static, [u8]>`. [#2959]
- Implement `MessageBody` for `&mut B` where `B: MessageBody + Unpin`. [#2868]
- Implement `MessageBody` for `Pin<B>` where `B::Target: MessageBody`. [#2868]
- Automatic h2c detection via new service finalizer `HttpService::tcp_auto_h2c()`. [#2957]
- `HeaderMap::retain()` [#2955].
- Header name constants in `header` module. [#2956] [#2968]
- `CACHE_STATUS`
- `CDN_CACHE_CONTROL`
- `CROSS_ORIGIN_EMBEDDER_POLICY`
- `CROSS_ORIGIN_OPENER_POLICY`
- `PERMISSIONS_POLICY`
- `X_FORWARDED_FOR`
- `X_FORWARDED_HOST`
- `X_FORWARDED_PROTO`
### Fixed
- Fix non-empty body of HTTP/2 HEAD responses. [#2920]
### Performance
- Improve overall performance of operations on `Extensions`. [#2890]
[#2959]: https://github.com/actix/actix-web/pull/2959
[#2868]: https://github.com/actix/actix-web/pull/2868
[#2890]: https://github.com/actix/actix-web/pull/2890
[#2920]: https://github.com/actix/actix-web/pull/2920
[#2957]: https://github.com/actix/actix-web/pull/2957
[#2955]: https://github.com/actix/actix-web/pull/2955
[#2956]: https://github.com/actix/actix-web/pull/2956
[#2968]: https://github.com/actix/actix-web/pull/2968
## 3.2.2 - 2022-09-11
### Changed
- Minimum supported Rust version (MSRV) is now 1.59 due to transitive `time` dependency.
### Fixed
- Avoid possibility of dispatcher getting stuck while back-pressuring I/O. [#2369]
[#2369]: https://github.com/actix/actix-web/pull/2369
## 3.2.1 - 2022-07-02
### Fixed
- Fix parsing ambiguity in Transfer-Encoding and Content-Length headers for HTTP/1.0 requests. [#2794]
[#2794]: https://github.com/actix/actix-web/pull/2794
## 3.2.0 - 2022-06-30
### Changed
- Minimum supported Rust version (MSRV) is now 1.57 due to transitive `time` dependency.
### Fixed
- Websocket parser no longer throws endless overflow errors after receiving an oversized frame. [#2790]
- Retain previously set Vary headers when using compression encoder. [#2798]
[#2790]: https://github.com/actix/actix-web/pull/2790
[#2798]: https://github.com/actix/actix-web/pull/2798
## 3.1.0 - 2022-06-11 ## 3.1.0 - 2022-06-11
@ -10,9 +72,9 @@
### Fixed ### Fixed
- Revert broken fix in [#2624] that caused erroneous 500 error responses. Temporarily re-introduces [#2357] bug. [#2779] - Revert broken fix in [#2624] that caused erroneous 500 error responses. Temporarily re-introduces [#2357] bug. [#2779]
[#2624]: https://github.com/actix/actix-web/pull/2624
[#2357]: https://github.com/actix/actix-web/issues/2357 [#2357]: https://github.com/actix/actix-web/issues/2357
[#2624]: https://github.com/actix/actix-web/issues/2624 [#2779]: https://github.com/actix/actix-web/pull/2779
[#2779]: https://github.com/actix/actix-web/issues/2779
## 3.0.4 - 2022-03-09 ## 3.0.4 - 2022-03-09
@ -24,14 +86,14 @@
### Fixed ### Fixed
- Allow spaces between header name and colon when parsing responses. [#2684] - Allow spaces between header name and colon when parsing responses. [#2684]
[#2684]: https://github.com/actix/actix-web/issues/2684 [#2684]: https://github.com/actix/actix-web/pull/2684
## 3.0.2 - 2022-03-05 ## 3.0.2 - 2022-03-05
### Fixed ### Fixed
- Fix encoding camel-case header names with more than one hyphen. [#2683] - Fix encoding camel-case header names with more than one hyphen. [#2683]
[#2683]: https://github.com/actix/actix-web/issues/2683 [#2683]: https://github.com/actix/actix-web/pull/2683
## 3.0.1 - 2022-03-04 ## 3.0.1 - 2022-03-04

View File

@ -1,6 +1,6 @@
[package] [package]
name = "actix-http" name = "actix-http"
version = "3.1.0" version = "3.3.0"
authors = [ authors = [
"Nikolay Kim <fafhrd91@gmail.com>", "Nikolay Kim <fafhrd91@gmail.com>",
"Rob Ede <robjtede@icloud.com>", "Rob Ede <robjtede@icloud.com>",
@ -67,7 +67,7 @@ bytes = "1"
bytestring = "1" bytestring = "1"
derive_more = "0.99.5" derive_more = "0.99.5"
encoding_rs = "0.8" encoding_rs = "0.8"
futures-core = { version = "0.3.7", default-features = false, features = ["alloc"] } futures-core = { version = "0.3.17", default-features = false, features = ["alloc"] }
http = "0.2.5" http = "0.2.5"
httparse = "1.5.1" httparse = "1.5.1"
httpdate = "1.0.1" httpdate = "1.0.1"
@ -77,6 +77,8 @@ mime = "0.3"
percent-encoding = "2.1" percent-encoding = "2.1"
pin-project-lite = "0.2" pin-project-lite = "0.2"
smallvec = "1.6.1" smallvec = "1.6.1"
tokio = { version = "1.18.4", features = [] }
tokio-util = { version = "0.7", features = ["io", "codec"] }
tracing = { version = "0.1.30", default-features = false, features = ["log"] } tracing = { version = "0.1.30", default-features = false, features = ["log"] }
# http2 # http2
@ -84,7 +86,7 @@ h2 = { version = "0.3.9", optional = true }
# websockets # websockets
local-channel = { version = "0.1", optional = true } local-channel = { version = "0.1", optional = true }
base64 = { version = "0.13", optional = true } base64 = { version = "0.21", optional = true }
rand = { version = "0.8", optional = true } rand = { version = "0.8", optional = true }
sha1 = { version = "0.10", optional = true } sha1 = { version = "0.10", optional = true }
@ -94,30 +96,30 @@ actix-tls = { version = "3", default-features = false, optional = true }
# compress-* # compress-*
brotli = { version = "3.3.3", optional = true } brotli = { version = "3.3.3", optional = true }
flate2 = { version = "1.0.13", optional = true } flate2 = { version = "1.0.13", optional = true }
zstd = { version = "0.11", optional = true } zstd = { version = "0.12", optional = true }
[dev-dependencies] [dev-dependencies]
actix-http-test = { version = "3.0.0-beta.13", features = ["openssl"] } actix-http-test = { version = "3", features = ["openssl"] }
actix-server = "2" actix-server = "2"
actix-tls = { version = "3", features = ["openssl"] } actix-tls = { version = "3", features = ["openssl"] }
actix-web = "4" actix-web = "4"
async-stream = "0.3" async-stream = "0.3"
criterion = { version = "0.3", features = ["html_reports"] } criterion = { version = "0.4", features = ["html_reports"] }
env_logger = "0.9" env_logger = "0.9"
futures-util = { version = "0.3.7", default-features = false, features = ["alloc"] } futures-util = { version = "0.3.17", default-features = false, features = ["alloc"] }
memchr = "2.4" memchr = "2.4"
once_cell = "1.9" once_cell = "1.9"
rcgen = "0.8" rcgen = "0.9"
regex = "1.3" regex = "1.3"
rustversion = "1" rustversion = "1"
rustls-pemfile = "0.2" rustls-pemfile = "1"
serde = { version = "1.0", features = ["derive"] } serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0" serde_json = "1.0"
static_assertions = "1" static_assertions = "1"
tls-openssl = { package = "openssl", version = "0.10.9" } tls-openssl = { package = "openssl", version = "0.10.9" }
tls-rustls = { package = "rustls", version = "0.20.0" } tls-rustls = { package = "rustls", version = "0.20.0" }
tokio = { version = "1.8.4", features = ["net", "rt", "macros"] } tokio = { version = "1.18.4", features = ["net", "rt", "macros"] }
[[example]] [[example]]
name = "ws" name = "ws"

View File

@ -3,11 +3,11 @@
> HTTP primitives for the Actix ecosystem. > HTTP primitives for the Actix ecosystem.
[![crates.io](https://img.shields.io/crates/v/actix-http?label=latest)](https://crates.io/crates/actix-http) [![crates.io](https://img.shields.io/crates/v/actix-http?label=latest)](https://crates.io/crates/actix-http)
[![Documentation](https://docs.rs/actix-http/badge.svg?version=3.1.0)](https://docs.rs/actix-http/3.1.0) [![Documentation](https://docs.rs/actix-http/badge.svg?version=3.3.0)](https://docs.rs/actix-http/3.3.0)
![Version](https://img.shields.io/badge/rustc-1.56+-ab6000.svg) ![Version](https://img.shields.io/badge/rustc-1.59+-ab6000.svg)
![MIT or Apache 2.0 licensed](https://img.shields.io/crates/l/actix-http.svg) ![MIT or Apache 2.0 licensed](https://img.shields.io/crates/l/actix-http.svg)
<br /> <br />
[![dependency status](https://deps.rs/crate/actix-http/3.1.0/status.svg)](https://deps.rs/crate/actix-http/3.1.0) [![dependency status](https://deps.rs/crate/actix-http/3.3.0/status.svg)](https://deps.rs/crate/actix-http/3.3.0)
[![Download](https://img.shields.io/crates/d/actix-http.svg)](https://crates.io/crates/actix-http) [![Download](https://img.shields.io/crates/d/actix-http.svg)](https://crates.io/crates/actix-http)
[![Chat on Discord](https://img.shields.io/discord/771444961383153695?label=chat&logo=discord)](https://discord.gg/NWpN5mmg3x) [![Chat on Discord](https://img.shields.io/discord/771444961383153695?label=chat&logo=discord)](https://discord.gg/NWpN5mmg3x)

View File

@ -1,3 +1,5 @@
#![allow(clippy::uninlined_format_args)]
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion}; use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};
const CODES: &[u16] = &[0, 1000, 201, 800, 550]; const CODES: &[u16] = &[0, 1000, 201, 800, 550];

View File

@ -0,0 +1,29 @@
//! An example that supports automatic selection of plaintext h1/h2c connections.
//!
//! Notably, both the following commands will work.
//! ```console
//! $ curl --http1.1 'http://localhost:8080/'
//! $ curl --http2-prior-knowledge 'http://localhost:8080/'
//! ```
use std::{convert::Infallible, io};
use actix_http::{HttpService, Request, Response, StatusCode};
use actix_server::Server;
#[tokio::main(flavor = "current_thread")]
async fn main() -> io::Result<()> {
env_logger::init_from_env(env_logger::Env::new().default_filter_or("info"));
Server::build()
.bind("h2c-detect", ("127.0.0.1", 8080), || {
HttpService::build()
.finish(|_req: Request| async move {
Ok::<_, Infallible>(Response::build(StatusCode::OK).body("Hello!"))
})
.tcp_auto_h2c()
})?
.workers(2)
.run()
.await
}

View File

@ -10,13 +10,13 @@ use std::{
time::Duration, time::Duration,
}; };
use actix_codec::Encoder;
use actix_http::{body::BodyStream, error::Error, ws, HttpService, Request, Response}; use actix_http::{body::BodyStream, error::Error, ws, HttpService, Request, Response};
use actix_rt::time::{interval, Interval}; use actix_rt::time::{interval, Interval};
use actix_server::Server; use actix_server::Server;
use bytes::{Bytes, BytesMut}; use bytes::{Bytes, BytesMut};
use bytestring::ByteString; use bytestring::ByteString;
use futures_core::{ready, Stream}; use futures_core::{ready, Stream};
use tokio_util::codec::Encoder;
use tracing::{info, trace}; use tracing::{info, trace};
#[actix_rt::main] #[actix_rt::main]

View File

@ -120,8 +120,28 @@ pub trait MessageBody {
} }
mod foreign_impls { mod foreign_impls {
use std::{borrow::Cow, ops::DerefMut};
use super::*; use super::*;
impl<B> MessageBody for &mut B
where
B: MessageBody + Unpin + ?Sized,
{
type Error = B::Error;
fn size(&self) -> BodySize {
(**self).size()
}
fn poll_next(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Option<Result<Bytes, Self::Error>>> {
Pin::new(&mut **self).poll_next(cx)
}
}
impl MessageBody for Infallible { impl MessageBody for Infallible {
type Error = Infallible; type Error = Infallible;
@ -179,8 +199,9 @@ mod foreign_impls {
} }
} }
impl<B> MessageBody for Pin<Box<B>> impl<T, B> MessageBody for Pin<T>
where where
T: DerefMut<Target = B> + Unpin,
B: MessageBody + ?Sized, B: MessageBody + ?Sized,
{ {
type Error = B::Error; type Error = B::Error;
@ -303,6 +324,39 @@ mod foreign_impls {
} }
} }
impl MessageBody for Cow<'static, [u8]> {
type Error = Infallible;
#[inline]
fn size(&self) -> BodySize {
BodySize::Sized(self.len() as u64)
}
#[inline]
fn poll_next(
self: Pin<&mut Self>,
_cx: &mut Context<'_>,
) -> Poll<Option<Result<Bytes, Self::Error>>> {
if self.is_empty() {
Poll::Ready(None)
} else {
let bytes = match mem::take(self.get_mut()) {
Cow::Borrowed(b) => Bytes::from_static(b),
Cow::Owned(b) => Bytes::from(b),
};
Poll::Ready(Some(Ok(bytes)))
}
}
#[inline]
fn try_into_bytes(self) -> Result<Bytes, Self> {
match self {
Cow::Borrowed(b) => Ok(Bytes::from_static(b)),
Cow::Owned(b) => Ok(Bytes::from(b)),
}
}
}
impl MessageBody for &'static str { impl MessageBody for &'static str {
type Error = Infallible; type Error = Infallible;
@ -358,6 +412,39 @@ mod foreign_impls {
} }
} }
impl MessageBody for Cow<'static, str> {
type Error = Infallible;
#[inline]
fn size(&self) -> BodySize {
BodySize::Sized(self.len() as u64)
}
#[inline]
fn poll_next(
self: Pin<&mut Self>,
_cx: &mut Context<'_>,
) -> Poll<Option<Result<Bytes, Self::Error>>> {
if self.is_empty() {
Poll::Ready(None)
} else {
let bytes = match mem::take(self.get_mut()) {
Cow::Borrowed(s) => Bytes::from_static(s.as_bytes()),
Cow::Owned(s) => Bytes::from(s.into_bytes()),
};
Poll::Ready(Some(Ok(bytes)))
}
}
#[inline]
fn try_into_bytes(self) -> Result<Bytes, Self> {
match self {
Cow::Borrowed(s) => Ok(Bytes::from_static(s.as_bytes())),
Cow::Owned(s) => Ok(Bytes::from(s.into_bytes())),
}
}
}
impl MessageBody for bytestring::ByteString { impl MessageBody for bytestring::ByteString {
type Error = Infallible; type Error = Infallible;
@ -445,6 +532,7 @@ mod tests {
use actix_rt::pin; use actix_rt::pin;
use actix_utils::future::poll_fn; use actix_utils::future::poll_fn;
use bytes::{Bytes, BytesMut}; use bytes::{Bytes, BytesMut};
use futures_util::stream;
use super::*; use super::*;
use crate::body::{self, EitherBody}; use crate::body::{self, EitherBody};
@ -481,6 +569,35 @@ mod tests {
assert_poll_next_none!(pl); assert_poll_next_none!(pl);
} }
#[actix_rt::test]
async fn mut_equivalence() {
assert_eq!(().size(), BodySize::Sized(0));
assert_eq!(().size(), (&(&mut ())).size());
let pl = &mut ();
pin!(pl);
assert_poll_next_none!(pl);
let pl = &mut Box::new(());
pin!(pl);
assert_poll_next_none!(pl);
let mut body = body::SizedStream::new(
8,
stream::iter([
Ok::<_, std::io::Error>(Bytes::from("1234")),
Ok(Bytes::from("5678")),
]),
);
let body = &mut body;
assert_eq!(body.size(), BodySize::Sized(8));
pin!(body);
assert_poll_next!(body, Bytes::from_static(b"1234"));
assert_poll_next!(body, Bytes::from_static(b"5678"));
assert_poll_next_none!(body);
}
#[allow(clippy::let_unit_value)]
#[actix_rt::test] #[actix_rt::test]
async fn test_unit() { async fn test_unit() {
let pl = (); let pl = ();
@ -606,4 +723,18 @@ mod tests {
let not_body = resp_body.downcast_ref::<()>(); let not_body = resp_body.downcast_ref::<()>();
assert!(not_body.is_none()); assert!(not_body.is_none());
} }
#[actix_rt::test]
async fn non_owning_to_bytes() {
let mut body = BoxBody::new(());
let bytes = body::to_bytes(&mut body).await.unwrap();
assert_eq!(bytes, Bytes::new());
let mut body = body::BodyStream::new(stream::iter([
Ok::<_, std::io::Error>(Bytes::from("1234")),
Ok(Bytes::from("5678")),
]));
let bytes = body::to_bytes(&mut body).await.unwrap();
assert_eq!(bytes, Bytes::from_static(b"12345678"));
}
} }

View File

@ -44,7 +44,7 @@ where
#[inline] #[inline]
fn size(&self) -> BodySize { fn size(&self) -> BodySize {
BodySize::Sized(self.size as u64) BodySize::Sized(self.size)
} }
/// Attempts to pull out the next value of the underlying [`Stream`]. /// Attempts to pull out the next value of the underlying [`Stream`].

View File

@ -42,7 +42,7 @@ pub async fn to_bytes<B: MessageBody>(body: B) -> Result<Bytes, B::Error> {
let body = body.as_mut(); let body = body.as_mut();
match ready!(body.poll_next(cx)) { match ready!(body.poll_next(cx)) {
Some(Ok(bytes)) => buf.extend_from_slice(&*bytes), Some(Ok(bytes)) => buf.extend_from_slice(&bytes),
None => return Poll::Ready(Ok(())), None => return Poll::Ready(Ok(())),
Some(Err(err)) => return Poll::Ready(Err(err)), Some(Err(err)) => return Poll::Ready(Err(err)),
} }

View File

@ -186,7 +186,7 @@ where
self self
} }
/// Finish service configuration and create a HTTP Service for HTTP/1 protocol. /// Finish service configuration and create a service for the HTTP/1 protocol.
pub fn h1<F, B>(self, service: F) -> H1Service<T, S, B, X, U> pub fn h1<F, B>(self, service: F) -> H1Service<T, S, B, X, U>
where where
B: MessageBody, B: MessageBody,
@ -209,8 +209,9 @@ where
.on_connect_ext(self.on_connect_ext) .on_connect_ext(self.on_connect_ext)
} }
/// Finish service configuration and create a HTTP service for HTTP/2 protocol. /// Finish service configuration and create a service for the HTTP/2 protocol.
#[cfg(feature = "http2")] #[cfg(feature = "http2")]
#[cfg_attr(docsrs, doc(cfg(feature = "http2")))]
pub fn h2<F, B>(self, service: F) -> crate::h2::H2Service<T, S, B> pub fn h2<F, B>(self, service: F) -> crate::h2::H2Service<T, S, B>
where where
F: IntoServiceFactory<S, Request>, F: IntoServiceFactory<S, Request>,

View File

@ -35,7 +35,7 @@ impl Default for ServiceConfig {
} }
impl ServiceConfig { impl ServiceConfig {
/// Create instance of `ServiceConfig` /// Create instance of `ServiceConfig`.
pub fn new( pub fn new(
keep_alive: KeepAlive, keep_alive: KeepAlive,
client_request_timeout: Duration, client_request_timeout: Duration,

View File

@ -257,7 +257,7 @@ fn update_head(encoding: ContentEncoding, head: &mut ResponseHead) {
head.headers_mut() head.headers_mut()
.insert(header::CONTENT_ENCODING, encoding.to_header_value()); .insert(header::CONTENT_ENCODING, encoding.to_header_value());
head.headers_mut() head.headers_mut()
.insert(header::VARY, HeaderValue::from_static("accept-encoding")); .append(header::VARY, HeaderValue::from_static("accept-encoding"));
head.no_chunking(false); head.no_chunking(false);
} }

View File

@ -294,6 +294,7 @@ impl std::error::Error for PayloadError {
PayloadError::Overflow => None, PayloadError::Overflow => None,
PayloadError::UnknownLength => None, PayloadError::UnknownLength => None,
#[cfg(feature = "http2")] #[cfg(feature = "http2")]
#[cfg_attr(docsrs, doc(cfg(feature = "http2")))]
PayloadError::Http2Payload(err) => Some(err), PayloadError::Http2Payload(err) => Some(err),
PayloadError::Io(err) => Some(err), PayloadError::Io(err) => Some(err),
} }
@ -351,6 +352,7 @@ pub enum DispatchError {
/// HTTP/2 error. /// HTTP/2 error.
#[display(fmt = "{}", _0)] #[display(fmt = "{}", _0)]
#[cfg(feature = "http2")] #[cfg(feature = "http2")]
#[cfg_attr(docsrs, doc(cfg(feature = "http2")))]
H2(h2::Error), H2(h2::Error),
/// The first request did not complete within the specified timeout. /// The first request did not complete within the specified timeout.
@ -388,7 +390,7 @@ impl StdError for DispatchError {
/// A set of error that can occur during parsing content type. /// A set of error that can occur during parsing content type.
#[derive(Debug, Display, Error)] #[derive(Debug, Display, Error)]
#[cfg_attr(test, derive(PartialEq))] #[cfg_attr(test, derive(PartialEq, Eq))]
#[non_exhaustive] #[non_exhaustive]
pub enum ContentTypeError { pub enum ContentTypeError {
/// Can not parse content type /// Can not parse content type

View File

@ -1,9 +1,30 @@
use std::{ use std::{
any::{Any, TypeId}, any::{Any, TypeId},
collections::HashMap,
fmt, fmt,
hash::{BuildHasherDefault, Hasher},
}; };
use ahash::AHashMap; /// A hasher for `TypeId`s that takes advantage of its known characteristics.
///
/// Author of `anymap` crate has done research on the topic:
/// https://github.com/chris-morgan/anymap/blob/2e9a5704/src/lib.rs#L599
#[derive(Debug, Default)]
struct NoOpHasher(u64);
impl Hasher for NoOpHasher {
fn write(&mut self, _bytes: &[u8]) {
unimplemented!("This NoOpHasher can only handle u64s")
}
fn write_u64(&mut self, i: u64) {
self.0 = i;
}
fn finish(&self) -> u64 {
self.0
}
}
/// A type map for request extensions. /// A type map for request extensions.
/// ///
@ -11,7 +32,7 @@ use ahash::AHashMap;
#[derive(Default)] #[derive(Default)]
pub struct Extensions { pub struct Extensions {
/// Use AHasher with a std HashMap with for faster lookups on the small `TypeId` keys. /// Use AHasher with a std HashMap with for faster lookups on the small `TypeId` keys.
map: AHashMap<TypeId, Box<dyn Any>>, map: HashMap<TypeId, Box<dyn Any>, BuildHasherDefault<NoOpHasher>>,
} }
impl Extensions { impl Extensions {
@ -19,7 +40,7 @@ impl Extensions {
#[inline] #[inline]
pub fn new() -> Extensions { pub fn new() -> Extensions {
Extensions { Extensions {
map: AHashMap::new(), map: HashMap::default(),
} }
} }

View File

@ -15,7 +15,7 @@ macro_rules! byte (
}) })
); );
#[derive(Debug, PartialEq, Clone)] #[derive(Debug, Clone, PartialEq, Eq)]
pub(super) enum ChunkedState { pub(super) enum ChunkedState {
Size, Size,
SizeLws, SizeLws,
@ -71,7 +71,7 @@ impl ChunkedState {
match size.checked_mul(radix) { match size.checked_mul(radix) {
Some(n) => { Some(n) => {
*size = n as u64; *size = n;
*size += rem as u64; *size += rem as u64;
Poll::Ready(Ok(ChunkedState::Size)) Poll::Ready(Ok(ChunkedState::Size))

View File

@ -1,9 +1,9 @@
use std::{fmt, io}; use std::{fmt, io};
use actix_codec::{Decoder, Encoder};
use bitflags::bitflags; use bitflags::bitflags;
use bytes::{Bytes, BytesMut}; use bytes::{Bytes, BytesMut};
use http::{Method, Version}; use http::{Method, Version};
use tokio_util::codec::{Decoder, Encoder};
use super::{ use super::{
decoder::{self, PayloadDecoder, PayloadItem, PayloadType}, decoder::{self, PayloadDecoder, PayloadItem, PayloadType},

View File

@ -1,9 +1,9 @@
use std::{fmt, io}; use std::{fmt, io};
use actix_codec::{Decoder, Encoder};
use bitflags::bitflags; use bitflags::bitflags;
use bytes::BytesMut; use bytes::BytesMut;
use http::{Method, Version}; use http::{Method, Version};
use tokio_util::codec::{Decoder, Encoder};
use super::{ use super::{
decoder::{self, PayloadDecoder, PayloadItem, PayloadType}, decoder::{self, PayloadDecoder, PayloadItem, PayloadType},

View File

@ -46,6 +46,23 @@ pub(crate) enum PayloadLength {
None, None,
} }
impl PayloadLength {
/// Returns true if variant is `None`.
fn is_none(&self) -> bool {
matches!(self, Self::None)
}
/// Returns true if the variant represents a zero-length (not none) payload.
fn is_zero(&self) -> bool {
matches!(
self,
PayloadLength::Payload(PayloadType::Payload(PayloadDecoder {
kind: Kind::Length(0)
}))
)
}
}
pub(crate) trait MessageType: Sized { pub(crate) trait MessageType: Sized {
fn set_connection_type(&mut self, conn_type: Option<ConnectionType>); fn set_connection_type(&mut self, conn_type: Option<ConnectionType>);
@ -59,6 +76,7 @@ pub(crate) trait MessageType: Sized {
&mut self, &mut self,
slice: &Bytes, slice: &Bytes,
raw_headers: &[HeaderIndex], raw_headers: &[HeaderIndex],
version: Version,
) -> Result<PayloadLength, ParseError> { ) -> Result<PayloadLength, ParseError> {
let mut ka = None; let mut ka = None;
let mut has_upgrade_websocket = false; let mut has_upgrade_websocket = false;
@ -87,21 +105,23 @@ pub(crate) trait MessageType: Sized {
return Err(ParseError::Header); return Err(ParseError::Header);
} }
header::CONTENT_LENGTH => match value.to_str() { header::CONTENT_LENGTH => match value.to_str().map(str::trim) {
Ok(s) if s.trim().starts_with('+') => { Ok(val) if val.starts_with('+') => {
debug!("illegal Content-Length: {:?}", s); debug!("illegal Content-Length: {:?}", val);
return Err(ParseError::Header); return Err(ParseError::Header);
} }
Ok(s) => {
if let Ok(len) = s.parse::<u64>() { Ok(val) => {
if len != 0 { if let Ok(len) = val.parse::<u64>() {
content_length = Some(len); // accept 0 lengths here and remove them in `decode` after all
} // headers have been processed to prevent request smuggling issues
content_length = Some(len);
} else { } else {
debug!("illegal Content-Length: {:?}", s); debug!("illegal Content-Length: {:?}", val);
return Err(ParseError::Header); return Err(ParseError::Header);
} }
} }
Err(_) => { Err(_) => {
debug!("illegal Content-Length: {:?}", value); debug!("illegal Content-Length: {:?}", value);
return Err(ParseError::Header); return Err(ParseError::Header);
@ -114,22 +134,23 @@ pub(crate) trait MessageType: Sized {
return Err(ParseError::Header); return Err(ParseError::Header);
} }
header::TRANSFER_ENCODING => { header::TRANSFER_ENCODING if version == Version::HTTP_11 => {
seen_te = true; seen_te = true;
if let Ok(s) = value.to_str().map(str::trim) { if let Ok(val) = value.to_str().map(str::trim) {
if s.eq_ignore_ascii_case("chunked") { if val.eq_ignore_ascii_case("chunked") {
chunked = true; chunked = true;
} else if s.eq_ignore_ascii_case("identity") { } else if val.eq_ignore_ascii_case("identity") {
// allow silently since multiple TE headers are already checked // allow silently since multiple TE headers are already checked
} else { } else {
debug!("illegal Transfer-Encoding: {:?}", s); debug!("illegal Transfer-Encoding: {:?}", val);
return Err(ParseError::Header); return Err(ParseError::Header);
} }
} else { } else {
return Err(ParseError::Header); return Err(ParseError::Header);
} }
} }
// connection keep-alive state // connection keep-alive state
header::CONNECTION => { header::CONNECTION => {
ka = if let Ok(conn) = value.to_str().map(str::trim) { ka = if let Ok(conn) = value.to_str().map(str::trim) {
@ -146,6 +167,7 @@ pub(crate) trait MessageType: Sized {
None None
}; };
} }
header::UPGRADE => { header::UPGRADE => {
if let Ok(val) = value.to_str().map(str::trim) { if let Ok(val) = value.to_str().map(str::trim) {
if val.eq_ignore_ascii_case("websocket") { if val.eq_ignore_ascii_case("websocket") {
@ -153,19 +175,23 @@ pub(crate) trait MessageType: Sized {
} }
} }
} }
header::EXPECT => { header::EXPECT => {
let bytes = value.as_bytes(); let bytes = value.as_bytes();
if bytes.len() >= 4 && &bytes[0..4] == b"100-" { if bytes.len() >= 4 && &bytes[0..4] == b"100-" {
expect = true; expect = true;
} }
} }
_ => {} _ => {}
} }
headers.append(name, value); headers.append(name, value);
} }
} }
self.set_connection_type(ka); self.set_connection_type(ka);
if expect { if expect {
self.set_expect() self.set_expect()
} }
@ -249,7 +275,22 @@ impl MessageType for Request {
let mut msg = Request::new(); let mut msg = Request::new();
// convert headers // convert headers
let length = msg.set_headers(&src.split_to(len).freeze(), &headers[..h_len])?; let mut length =
msg.set_headers(&src.split_to(len).freeze(), &headers[..h_len], ver)?;
// disallow HTTP/1.0 POST requests that do not contain a Content-Length header
// see https://datatracker.ietf.org/doc/html/rfc1945#section-7.2.2
if ver == Version::HTTP_10 && method == Method::POST && length.is_none() {
debug!("no Content-Length specified for HTTP/1.0 POST request");
return Err(ParseError::Header);
}
// Remove CL value if 0 now that all headers and HTTP/1.0 special cases are processed.
// Protects against some request smuggling attacks.
// See https://github.com/actix/actix-web/issues/2767.
if length.is_zero() {
length = PayloadLength::None;
}
// payload decoder // payload decoder
let decoder = match length { let decoder = match length {
@ -337,7 +378,15 @@ impl MessageType for ResponseHead {
msg.version = ver; msg.version = ver;
// convert headers // convert headers
let length = msg.set_headers(&src.split_to(len).freeze(), &headers[..h_len])?; let mut length =
msg.set_headers(&src.split_to(len).freeze(), &headers[..h_len], ver)?;
// Remove CL value if 0 now that all headers and HTTP/1.0 special cases are processed.
// Protects against some request smuggling attacks.
// See https://github.com/actix/actix-web/issues/2767.
if length.is_zero() {
length = PayloadLength::None;
}
// message payload // message payload
let decoder = if let PayloadLength::Payload(pl) = length { let decoder = if let PayloadLength::Payload(pl) = length {
@ -391,7 +440,7 @@ impl HeaderIndex {
} }
} }
#[derive(Debug, Clone, PartialEq)] #[derive(Debug, Clone, PartialEq, Eq)]
/// Chunk type yielded while decoding a payload. /// Chunk type yielded while decoding a payload.
pub enum PayloadItem { pub enum PayloadItem {
Chunk(Bytes), Chunk(Bytes),
@ -401,7 +450,7 @@ pub enum PayloadItem {
/// Decoder that can handle different payload types. /// Decoder that can handle different payload types.
/// ///
/// If a message body does not use `Transfer-Encoding`, it should include a `Content-Length`. /// If a message body does not use `Transfer-Encoding`, it should include a `Content-Length`.
#[derive(Debug, Clone, PartialEq)] #[derive(Debug, Clone, PartialEq, Eq)]
pub struct PayloadDecoder { pub struct PayloadDecoder {
kind: Kind, kind: Kind,
} }
@ -427,7 +476,7 @@ impl PayloadDecoder {
} }
} }
#[derive(Debug, Clone, PartialEq)] #[derive(Debug, Clone, PartialEq, Eq)]
enum Kind { enum Kind {
/// A reader used when a `Content-Length` header is passed with a positive integer. /// A reader used when a `Content-Length` header is passed with a positive integer.
Length(u64), Length(u64),
@ -606,14 +655,100 @@ mod tests {
} }
#[test] #[test]
fn test_parse_post() { fn parse_h09_reject() {
let mut buf = BytesMut::from("POST /test2 HTTP/1.0\r\n\r\n"); let mut buf = BytesMut::from(
"GET /test1 HTTP/0.9\r\n\
\r\n",
);
let mut reader = MessageDecoder::<Request>::default();
reader.decode(&mut buf).unwrap_err();
let mut buf = BytesMut::from(
"POST /test2 HTTP/0.9\r\n\
Content-Length: 3\r\n\
\r\n
abc",
);
let mut reader = MessageDecoder::<Request>::default();
reader.decode(&mut buf).unwrap_err();
}
#[test]
fn parse_h10_get() {
let mut buf = BytesMut::from(
"GET /test1 HTTP/1.0\r\n\
\r\n",
);
let mut reader = MessageDecoder::<Request>::default();
let (req, _) = reader.decode(&mut buf).unwrap().unwrap();
assert_eq!(req.version(), Version::HTTP_10);
assert_eq!(*req.method(), Method::GET);
assert_eq!(req.path(), "/test1");
let mut buf = BytesMut::from(
"GET /test2 HTTP/1.0\r\n\
Content-Length: 0\r\n\
\r\n",
);
let mut reader = MessageDecoder::<Request>::default();
let (req, _) = reader.decode(&mut buf).unwrap().unwrap();
assert_eq!(req.version(), Version::HTTP_10);
assert_eq!(*req.method(), Method::GET);
assert_eq!(req.path(), "/test2");
let mut buf = BytesMut::from(
"GET /test3 HTTP/1.0\r\n\
Content-Length: 3\r\n\
\r\n
abc",
);
let mut reader = MessageDecoder::<Request>::default();
let (req, _) = reader.decode(&mut buf).unwrap().unwrap();
assert_eq!(req.version(), Version::HTTP_10);
assert_eq!(*req.method(), Method::GET);
assert_eq!(req.path(), "/test3");
}
#[test]
fn parse_h10_post() {
let mut buf = BytesMut::from(
"POST /test1 HTTP/1.0\r\n\
Content-Length: 3\r\n\
\r\n\
abc",
);
let mut reader = MessageDecoder::<Request>::default();
let (req, _) = reader.decode(&mut buf).unwrap().unwrap();
assert_eq!(req.version(), Version::HTTP_10);
assert_eq!(*req.method(), Method::POST);
assert_eq!(req.path(), "/test1");
let mut buf = BytesMut::from(
"POST /test2 HTTP/1.0\r\n\
Content-Length: 0\r\n\
\r\n",
);
let mut reader = MessageDecoder::<Request>::default(); let mut reader = MessageDecoder::<Request>::default();
let (req, _) = reader.decode(&mut buf).unwrap().unwrap(); let (req, _) = reader.decode(&mut buf).unwrap().unwrap();
assert_eq!(req.version(), Version::HTTP_10); assert_eq!(req.version(), Version::HTTP_10);
assert_eq!(*req.method(), Method::POST); assert_eq!(*req.method(), Method::POST);
assert_eq!(req.path(), "/test2"); assert_eq!(req.path(), "/test2");
let mut buf = BytesMut::from(
"POST /test3 HTTP/1.0\r\n\
\r\n",
);
let mut reader = MessageDecoder::<Request>::default();
let err = reader.decode(&mut buf).unwrap_err();
assert!(err.to_string().contains("Header"))
} }
#[test] #[test]
@ -709,121 +844,98 @@ mod tests {
#[test] #[test]
fn test_conn_default_1_0() { fn test_conn_default_1_0() {
let mut buf = BytesMut::from("GET /test HTTP/1.0\r\n\r\n"); let req = parse_ready!(&mut BytesMut::from("GET /test HTTP/1.0\r\n\r\n"));
let req = parse_ready!(&mut buf);
assert_eq!(req.head().connection_type(), ConnectionType::Close); assert_eq!(req.head().connection_type(), ConnectionType::Close);
} }
#[test] #[test]
fn test_conn_default_1_1() { fn test_conn_default_1_1() {
let mut buf = BytesMut::from("GET /test HTTP/1.1\r\n\r\n"); let req = parse_ready!(&mut BytesMut::from("GET /test HTTP/1.1\r\n\r\n"));
let req = parse_ready!(&mut buf);
assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive); assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive);
} }
#[test] #[test]
fn test_conn_close() { fn test_conn_close() {
let mut buf = BytesMut::from( let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\ "GET /test HTTP/1.1\r\n\
connection: close\r\n\r\n", connection: close\r\n\r\n",
); ));
let req = parse_ready!(&mut buf);
assert_eq!(req.head().connection_type(), ConnectionType::Close); assert_eq!(req.head().connection_type(), ConnectionType::Close);
let mut buf = BytesMut::from( let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\ "GET /test HTTP/1.1\r\n\
connection: Close\r\n\r\n", connection: Close\r\n\r\n",
); ));
let req = parse_ready!(&mut buf);
assert_eq!(req.head().connection_type(), ConnectionType::Close); assert_eq!(req.head().connection_type(), ConnectionType::Close);
} }
#[test] #[test]
fn test_conn_close_1_0() { fn test_conn_close_1_0() {
let mut buf = BytesMut::from( let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.0\r\n\ "GET /test HTTP/1.0\r\n\
connection: close\r\n\r\n", connection: close\r\n\r\n",
); ));
let req = parse_ready!(&mut buf);
assert_eq!(req.head().connection_type(), ConnectionType::Close); assert_eq!(req.head().connection_type(), ConnectionType::Close);
} }
#[test] #[test]
fn test_conn_keep_alive_1_0() { fn test_conn_keep_alive_1_0() {
let mut buf = BytesMut::from( let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.0\r\n\ "GET /test HTTP/1.0\r\n\
connection: keep-alive\r\n\r\n", connection: keep-alive\r\n\r\n",
); ));
let req = parse_ready!(&mut buf);
assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive); assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive);
let mut buf = BytesMut::from( let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.0\r\n\ "GET /test HTTP/1.0\r\n\
connection: Keep-Alive\r\n\r\n", connection: Keep-Alive\r\n\r\n",
); ));
let req = parse_ready!(&mut buf);
assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive); assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive);
} }
#[test] #[test]
fn test_conn_keep_alive_1_1() { fn test_conn_keep_alive_1_1() {
let mut buf = BytesMut::from( let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\ "GET /test HTTP/1.1\r\n\
connection: keep-alive\r\n\r\n", connection: keep-alive\r\n\r\n",
); ));
let req = parse_ready!(&mut buf);
assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive); assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive);
} }
#[test] #[test]
fn test_conn_other_1_0() { fn test_conn_other_1_0() {
let mut buf = BytesMut::from( let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.0\r\n\ "GET /test HTTP/1.0\r\n\
connection: other\r\n\r\n", connection: other\r\n\r\n",
); ));
let req = parse_ready!(&mut buf);
assert_eq!(req.head().connection_type(), ConnectionType::Close); assert_eq!(req.head().connection_type(), ConnectionType::Close);
} }
#[test] #[test]
fn test_conn_other_1_1() { fn test_conn_other_1_1() {
let mut buf = BytesMut::from( let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\ "GET /test HTTP/1.1\r\n\
connection: other\r\n\r\n", connection: other\r\n\r\n",
); ));
let req = parse_ready!(&mut buf);
assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive); assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive);
} }
#[test] #[test]
fn test_conn_upgrade() { fn test_conn_upgrade() {
let mut buf = BytesMut::from( let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\ "GET /test HTTP/1.1\r\n\
upgrade: websockets\r\n\ upgrade: websockets\r\n\
connection: upgrade\r\n\r\n", connection: upgrade\r\n\r\n",
); ));
let req = parse_ready!(&mut buf);
assert!(req.upgrade()); assert!(req.upgrade());
assert_eq!(req.head().connection_type(), ConnectionType::Upgrade); assert_eq!(req.head().connection_type(), ConnectionType::Upgrade);
let mut buf = BytesMut::from( let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\ "GET /test HTTP/1.1\r\n\
upgrade: Websockets\r\n\ upgrade: Websockets\r\n\
connection: Upgrade\r\n\r\n", connection: Upgrade\r\n\r\n",
); ));
let req = parse_ready!(&mut buf);
assert!(req.upgrade()); assert!(req.upgrade());
assert_eq!(req.head().connection_type(), ConnectionType::Upgrade); assert_eq!(req.head().connection_type(), ConnectionType::Upgrade);
@ -831,59 +943,62 @@ mod tests {
#[test] #[test]
fn test_conn_upgrade_connect_method() { fn test_conn_upgrade_connect_method() {
let mut buf = BytesMut::from( let req = parse_ready!(&mut BytesMut::from(
"CONNECT /test HTTP/1.1\r\n\ "CONNECT /test HTTP/1.1\r\n\
content-type: text/plain\r\n\r\n", content-type: text/plain\r\n\r\n",
); ));
let req = parse_ready!(&mut buf);
assert!(req.upgrade()); assert!(req.upgrade());
} }
#[test] #[test]
fn test_headers_content_length_err_1() { fn test_headers_bad_content_length() {
let mut buf = BytesMut::from( // string CL
expect_parse_err!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\ "GET /test HTTP/1.1\r\n\
content-length: line\r\n\r\n", content-length: line\r\n\r\n",
); ));
expect_parse_err!(&mut buf) // negative CL
expect_parse_err!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
content-length: -1\r\n\r\n",
));
} }
#[test] #[test]
fn test_headers_content_length_err_2() { fn octal_ish_cl_parsed_as_decimal() {
let mut buf = BytesMut::from( let mut buf = BytesMut::from(
"GET /test HTTP/1.1\r\n\ "POST /test HTTP/1.1\r\n\
content-length: -1\r\n\r\n", content-length: 011\r\n\r\n",
); );
let mut reader = MessageDecoder::<Request>::default();
expect_parse_err!(&mut buf); let (_req, pl) = reader.decode(&mut buf).unwrap().unwrap();
assert!(matches!(
pl,
PayloadType::Payload(pl) if pl == PayloadDecoder::length(11)
));
} }
#[test] #[test]
fn test_invalid_header() { fn test_invalid_header() {
let mut buf = BytesMut::from( expect_parse_err!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\ "GET /test HTTP/1.1\r\n\
test line\r\n\r\n", test line\r\n\r\n",
); ));
expect_parse_err!(&mut buf);
} }
#[test] #[test]
fn test_invalid_name() { fn test_invalid_name() {
let mut buf = BytesMut::from( expect_parse_err!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\ "GET /test HTTP/1.1\r\n\
test[]: line\r\n\r\n", test[]: line\r\n\r\n",
); ));
expect_parse_err!(&mut buf);
} }
#[test] #[test]
fn test_http_request_bad_status_line() { fn test_http_request_bad_status_line() {
let mut buf = BytesMut::from("getpath \r\n\r\n"); expect_parse_err!(&mut BytesMut::from("getpath \r\n\r\n"));
expect_parse_err!(&mut buf);
} }
#[test] #[test]
@ -923,11 +1038,10 @@ mod tests {
#[test] #[test]
fn test_http_request_parser_utf8() { fn test_http_request_parser_utf8() {
let mut buf = BytesMut::from( let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\ "GET /test HTTP/1.1\r\n\
x-test: тест\r\n\r\n", x-test: тест\r\n\r\n",
); ));
let req = parse_ready!(&mut buf);
assert_eq!( assert_eq!(
req.headers().get("x-test").unwrap().as_bytes(), req.headers().get("x-test").unwrap().as_bytes(),
@ -937,24 +1051,18 @@ mod tests {
#[test] #[test]
fn test_http_request_parser_two_slashes() { fn test_http_request_parser_two_slashes() {
let mut buf = BytesMut::from("GET //path HTTP/1.1\r\n\r\n"); let req = parse_ready!(&mut BytesMut::from("GET //path HTTP/1.1\r\n\r\n"));
let req = parse_ready!(&mut buf);
assert_eq!(req.path(), "//path"); assert_eq!(req.path(), "//path");
} }
#[test] #[test]
fn test_http_request_parser_bad_method() { fn test_http_request_parser_bad_method() {
let mut buf = BytesMut::from("!12%()+=~$ /get HTTP/1.1\r\n\r\n"); expect_parse_err!(&mut BytesMut::from("!12%()+=~$ /get HTTP/1.1\r\n\r\n"));
expect_parse_err!(&mut buf);
} }
#[test] #[test]
fn test_http_request_parser_bad_version() { fn test_http_request_parser_bad_version() {
let mut buf = BytesMut::from("GET //get HT/11\r\n\r\n"); expect_parse_err!(&mut BytesMut::from("GET //get HT/11\r\n\r\n"));
expect_parse_err!(&mut buf);
} }
#[test] #[test]
@ -971,29 +1079,66 @@ mod tests {
#[test] #[test]
fn hrs_multiple_content_length() { fn hrs_multiple_content_length() {
let mut buf = BytesMut::from( expect_parse_err!(&mut BytesMut::from(
"GET / HTTP/1.1\r\n\ "GET / HTTP/1.1\r\n\
Host: example.com\r\n\ Host: example.com\r\n\
Content-Length: 4\r\n\ Content-Length: 4\r\n\
Content-Length: 2\r\n\ Content-Length: 2\r\n\
\r\n\ \r\n\
abcd", abcd",
); ));
expect_parse_err!(&mut buf); expect_parse_err!(&mut BytesMut::from(
"GET / HTTP/1.1\r\n\
Host: example.com\r\n\
Content-Length: 0\r\n\
Content-Length: 2\r\n\
\r\n\
ab",
));
} }
#[test] #[test]
fn hrs_content_length_plus() { fn hrs_content_length_plus() {
let mut buf = BytesMut::from( expect_parse_err!(&mut BytesMut::from(
"GET / HTTP/1.1\r\n\ "GET / HTTP/1.1\r\n\
Host: example.com\r\n\ Host: example.com\r\n\
Content-Length: +3\r\n\ Content-Length: +3\r\n\
\r\n\ \r\n\
000", 000",
));
}
#[test]
fn hrs_te_http10() {
// in HTTP/1.0 transfer encoding is ignored, so the request must contain a CL header
expect_parse_err!(&mut BytesMut::from(
"POST / HTTP/1.0\r\n\
Host: example.com\r\n\
Transfer-Encoding: chunked\r\n\
\r\n\
3\r\n\
aaa\r\n\
0\r\n\
",
));
}
#[test]
fn hrs_cl_and_te_http10() {
// in HTTP/1.0 transfer encoding is simply ignored so it's fine to have both
let mut buf = BytesMut::from(
"GET / HTTP/1.0\r\n\
Host: example.com\r\n\
Content-Length: 3\r\n\
Transfer-Encoding: chunked\r\n\
\r\n\
000",
); );
expect_parse_err!(&mut buf); parse_ready!(&mut buf);
} }
#[test] #[test]

View File

@ -8,13 +8,15 @@ use std::{
task::{Context, Poll}, task::{Context, Poll},
}; };
use actix_codec::{AsyncRead, AsyncWrite, Decoder as _, Encoder as _, Framed, FramedParts}; use actix_codec::{Framed, FramedParts};
use actix_rt::time::sleep_until; use actix_rt::time::sleep_until;
use actix_service::Service; use actix_service::Service;
use bitflags::bitflags; use bitflags::bitflags;
use bytes::{Buf, BytesMut}; use bytes::{Buf, BytesMut};
use futures_core::ready; use futures_core::ready;
use pin_project_lite::pin_project; use pin_project_lite::pin_project;
use tokio::io::{AsyncRead, AsyncWrite};
use tokio_util::codec::{Decoder as _, Encoder as _};
use tracing::{error, trace}; use tracing::{error, trace};
use crate::{ use crate::{
@ -976,9 +978,11 @@ where
// //
// A Request head too large to parse is only checked on `httparse::Status::Partial`. // A Request head too large to parse is only checked on `httparse::Status::Partial`.
if this.payload.is_none() { match this.payload {
// When dispatcher has a payload the responsibility of wake up it would be shift // When dispatcher has a payload the responsibility of wake ups is shifted to
// to h1::payload::Payload. // `h1::payload::Payload` unless the payload needs a read, in which case it
// might not have access to the waker and could result in the dispatcher
// getting stuck until timeout.
// //
// Reason: // Reason:
// Self wake up when there is payload would waste poll and/or result in // Self wake up when there is payload would waste poll and/or result in
@ -989,7 +993,8 @@ where
// read anymore. At this case read_buf could always remain beyond // read anymore. At this case read_buf could always remain beyond
// MAX_BUFFER_SIZE and self wake up would be busy poll dispatcher and // MAX_BUFFER_SIZE and self wake up would be busy poll dispatcher and
// waste resources. // waste resources.
cx.waker().wake_by_ref(); Some(ref p) if p.need_read(cx) != PayloadStatus::Read => {}
_ => cx.waker().wake_by_ref(),
} }
return Ok(false); return Ok(false);
@ -1001,7 +1006,7 @@ where
this.read_buf.reserve(HW_BUFFER_SIZE - remaining); this.read_buf.reserve(HW_BUFFER_SIZE - remaining);
} }
match actix_codec::poll_read_buf(io.as_mut(), cx, this.read_buf) { match tokio_util::io::poll_read_buf(io.as_mut(), cx, this.read_buf) {
Poll::Ready(Ok(n)) => { Poll::Ready(Ok(n)) => {
this.flags.remove(Flags::FINISHED); this.flags.remove(Flags::FINISHED);

View File

@ -64,7 +64,7 @@ fn drop_payload_service(
fn echo_payload_service() -> impl Service<Request, Response = Response<Bytes>, Error = Error> { fn echo_payload_service() -> impl Service<Request, Response = Response<Bytes>, Error = Error> {
fn_service(|mut req: Request| { fn_service(|mut req: Request| {
Box::pin(async move { Box::pin(async move {
use futures_util::stream::StreamExt as _; use futures_util::StreamExt as _;
let mut pl = req.take_payload(); let mut pl = req.take_payload();
let mut body = BytesMut::new(); let mut body = BytesMut::new();
@ -637,7 +637,7 @@ async fn expect_handling() {
if let DispatcherState::Normal { ref inner } = h1.inner { if let DispatcherState::Normal { ref inner } = h1.inner {
let io = inner.io.as_ref().unwrap(); let io = inner.io.as_ref().unwrap();
let mut res = (&io.write_buf()[..]).to_owned(); let mut res = io.write_buf()[..].to_owned();
stabilize_date_header(&mut res); stabilize_date_header(&mut res);
assert_eq!( assert_eq!(
@ -699,7 +699,7 @@ async fn expect_eager() {
if let DispatcherState::Normal { ref inner } = h1.inner { if let DispatcherState::Normal { ref inner } = h1.inner {
let io = inner.io.as_ref().unwrap(); let io = inner.io.as_ref().unwrap();
let mut res = (&io.write_buf()[..]).to_owned(); let mut res = io.write_buf()[..].to_owned();
stabilize_date_header(&mut res); stabilize_date_header(&mut res);
// Despite the content-length header and even though the request payload has not // Despite the content-length header and even though the request payload has not

View File

@ -450,7 +450,7 @@ impl TransferEncoding {
buf.extend_from_slice(&msg[..len as usize]); buf.extend_from_slice(&msg[..len as usize]);
*remaining -= len as u64; *remaining -= len;
Ok(*remaining == 0) Ok(*remaining == 0)
} else { } else {
Ok(true) Ok(true)

View File

@ -16,7 +16,7 @@ use crate::error::PayloadError;
/// max buffer size 32k /// max buffer size 32k
pub(crate) const MAX_BUFFER_SIZE: usize = 32_768; pub(crate) const MAX_BUFFER_SIZE: usize = 32_768;
#[derive(Debug, PartialEq)] #[derive(Debug, PartialEq, Eq)]
pub enum PayloadStatus { pub enum PayloadStatus {
Read, Read,
Pause, Pause,
@ -252,19 +252,15 @@ impl Inner {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use std::panic::{RefUnwindSafe, UnwindSafe};
use actix_utils::future::poll_fn; use actix_utils::future::poll_fn;
use static_assertions::{assert_impl_all, assert_not_impl_any}; use static_assertions::{assert_impl_all, assert_not_impl_any};
use super::*; use super::*;
assert_impl_all!(Payload: Unpin); assert_impl_all!(Payload: Unpin);
assert_not_impl_any!(Payload: Send, Sync, UnwindSafe, RefUnwindSafe); assert_not_impl_any!(Payload: Send, Sync);
assert_impl_all!(Inner: Unpin, Send, Sync); assert_impl_all!(Inner: Unpin, Send, Sync);
// assertion not stable wrt rustc versions yet
// assert_impl_all!(Inner: UnwindSafe, RefUnwindSafe);
#[actix_rt::test] #[actix_rt::test]
async fn test_unread_data() { async fn test_unread_data() {

View File

@ -134,6 +134,7 @@ mod openssl {
U::InitError: fmt::Debug, U::InitError: fmt::Debug,
{ {
/// Create OpenSSL based service. /// Create OpenSSL based service.
#[cfg_attr(docsrs, doc(cfg(feature = "openssl")))]
pub fn openssl( pub fn openssl(
self, self,
acceptor: SslAcceptor, acceptor: SslAcceptor,
@ -196,6 +197,7 @@ mod rustls {
U::InitError: fmt::Debug, U::InitError: fmt::Debug,
{ {
/// Create Rustls based service. /// Create Rustls based service.
#[cfg_attr(docsrs, doc(cfg(feature = "rustls")))]
pub fn rustls( pub fn rustls(
self, self,
config: ServerConfig, config: ServerConfig,

View File

@ -29,7 +29,7 @@ use crate::{
HeaderName, HeaderValue, CONNECTION, CONTENT_LENGTH, DATE, TRANSFER_ENCODING, UPGRADE, HeaderName, HeaderValue, CONNECTION, CONTENT_LENGTH, DATE, TRANSFER_ENCODING, UPGRADE,
}, },
service::HttpFlow, service::HttpFlow,
Extensions, OnConnectData, Payload, Request, Response, ResponseHead, Extensions, Method, OnConnectData, Payload, Request, Response, ResponseHead,
}; };
const CHUNK_SIZE: usize = 16_384; const CHUNK_SIZE: usize = 16_384;
@ -67,7 +67,7 @@ where
timer timer
}) })
.unwrap_or_else(|| Box::pin(sleep(dur))), .unwrap_or_else(|| Box::pin(sleep(dur))),
on_flight: false, in_flight: false,
ping_pong: conn.ping_pong().unwrap(), ping_pong: conn.ping_pong().unwrap(),
}); });
@ -84,9 +84,14 @@ where
} }
struct H2PingPong { struct H2PingPong {
timer: Pin<Box<Sleep>>, /// Handle to send ping frames to the peer.
on_flight: bool,
ping_pong: PingPong, ping_pong: PingPong,
/// True when a ping has been sent and is waiting for a reply.
in_flight: bool,
/// Timeout for pong response.
timer: Pin<Box<Sleep>>,
} }
impl<T, S, B, X, U> Future for Dispatcher<T, S, B, X, U> impl<T, S, B, X, U> Future for Dispatcher<T, S, B, X, U>
@ -113,6 +118,7 @@ where
let payload = crate::h2::Payload::new(body); let payload = crate::h2::Payload::new(body);
let pl = Payload::H2 { payload }; let pl = Payload::H2 { payload };
let mut req = Request::with_payload(pl); let mut req = Request::with_payload(pl);
let head_req = parts.method == Method::HEAD;
let head = req.head_mut(); let head = req.head_mut();
head.uri = parts.uri; head.uri = parts.uri;
@ -130,10 +136,10 @@ where
actix_rt::spawn(async move { actix_rt::spawn(async move {
// resolve service call and send response. // resolve service call and send response.
let res = match fut.await { let res = match fut.await {
Ok(res) => handle_response(res.into(), tx, config).await, Ok(res) => handle_response(res.into(), tx, config, head_req).await,
Err(err) => { Err(err) => {
let res: Response<BoxBody> = err.into(); let res: Response<BoxBody> = err.into();
handle_response(res, tx, config).await handle_response(res, tx, config, head_req).await
} }
}; };
@ -152,26 +158,28 @@ where
}); });
} }
Poll::Ready(None) => return Poll::Ready(Ok(())), Poll::Ready(None) => return Poll::Ready(Ok(())),
Poll::Pending => match this.ping_pong.as_mut() { Poll::Pending => match this.ping_pong.as_mut() {
Some(ping_pong) => loop { Some(ping_pong) => loop {
if ping_pong.on_flight { if ping_pong.in_flight {
// When have on flight ping pong. poll pong and and keep alive timer. // When there is an in-flight ping-pong, poll pong and and keep-alive
// on success pong received update keep alive timer to determine the next timing of // timer. On successful pong received, update keep-alive timer to
// ping pong. // determine the next timing of ping pong.
match ping_pong.ping_pong.poll_pong(cx)? { match ping_pong.ping_pong.poll_pong(cx)? {
Poll::Ready(_) => { Poll::Ready(_) => {
ping_pong.on_flight = false; ping_pong.in_flight = false;
let dead_line = this.config.keep_alive_deadline().unwrap(); let dead_line = this.config.keep_alive_deadline().unwrap();
ping_pong.timer.as_mut().reset(dead_line.into()); ping_pong.timer.as_mut().reset(dead_line.into());
} }
Poll::Pending => { Poll::Pending => {
return ping_pong.timer.as_mut().poll(cx).map(|_| Ok(())) return ping_pong.timer.as_mut().poll(cx).map(|_| Ok(()));
} }
} }
} else { } else {
// When there is no on flight ping pong. keep alive timer is used to wait for next // When there is no in-flight ping-pong, keep-alive timer is used to
// timing of ping pong. Therefore at this point it serves as an interval instead. // wait for next timing of ping-pong. Therefore, at this point it serves
// as an interval instead.
ready!(ping_pong.timer.as_mut().poll(cx)); ready!(ping_pong.timer.as_mut().poll(cx));
ping_pong.ping_pong.send_ping(Ping::opaque())?; ping_pong.ping_pong.send_ping(Ping::opaque())?;
@ -179,7 +187,7 @@ where
let dead_line = this.config.keep_alive_deadline().unwrap(); let dead_line = this.config.keep_alive_deadline().unwrap();
ping_pong.timer.as_mut().reset(dead_line.into()); ping_pong.timer.as_mut().reset(dead_line.into());
ping_pong.on_flight = true; ping_pong.in_flight = true;
} }
}, },
None => return Poll::Pending, None => return Poll::Pending,
@ -199,6 +207,7 @@ async fn handle_response<B>(
res: Response<B>, res: Response<B>,
mut tx: SendResponse<Bytes>, mut tx: SendResponse<Bytes>,
config: ServiceConfig, config: ServiceConfig,
head_req: bool,
) -> Result<(), DispatchError> ) -> Result<(), DispatchError>
where where
B: MessageBody, B: MessageBody,
@ -208,14 +217,14 @@ where
// prepare response. // prepare response.
let mut size = body.size(); let mut size = body.size();
let res = prepare_response(config, res.head(), &mut size); let res = prepare_response(config, res.head(), &mut size);
let eof = size.is_eof(); let eof_or_head = size.is_eof() || head_req;
// send response head and return on eof. // send response head and return on eof.
let mut stream = tx let mut stream = tx
.send_response(res, eof) .send_response(res, eof_or_head)
.map_err(DispatchError::SendResponse)?; .map_err(DispatchError::SendResponse)?;
if eof { if eof_or_head {
return Ok(()); return Ok(());
} }
@ -287,13 +296,13 @@ fn prepare_response(
_ => {} _ => {}
} }
let _ = match size { match size {
BodySize::None | BodySize::Stream => None, BodySize::None | BodySize::Stream => {}
BodySize::Sized(0) => { BodySize::Sized(0) => {
#[allow(clippy::declare_interior_mutable_const)] #[allow(clippy::declare_interior_mutable_const)]
const HV_ZERO: HeaderValue = HeaderValue::from_static("0"); const HV_ZERO: HeaderValue = HeaderValue::from_static("0");
res.headers_mut().insert(CONTENT_LENGTH, HV_ZERO) res.headers_mut().insert(CONTENT_LENGTH, HV_ZERO);
} }
BodySize::Sized(len) => { BodySize::Sized(len) => {
@ -302,7 +311,7 @@ fn prepare_response(
res.headers_mut().insert( res.headers_mut().insert(
CONTENT_LENGTH, CONTENT_LENGTH,
HeaderValue::from_str(buf.format(*len)).unwrap(), HeaderValue::from_str(buf.format(*len)).unwrap(),
) );
} }
}; };

View File

@ -103,11 +103,9 @@ where
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use std::panic::{RefUnwindSafe, UnwindSafe};
use static_assertions::assert_impl_all; use static_assertions::assert_impl_all;
use super::*; use super::*;
assert_impl_all!(Payload: Unpin, Send, Sync, UnwindSafe, RefUnwindSafe); assert_impl_all!(Payload: Unpin, Send, Sync);
} }

View File

@ -117,6 +117,7 @@ mod openssl {
B: MessageBody + 'static, B: MessageBody + 'static,
{ {
/// Create OpenSSL based service. /// Create OpenSSL based service.
#[cfg_attr(docsrs, doc(cfg(feature = "openssl")))]
pub fn openssl( pub fn openssl(
self, self,
acceptor: SslAcceptor, acceptor: SslAcceptor,
@ -164,6 +165,7 @@ mod rustls {
B: MessageBody + 'static, B: MessageBody + 'static,
{ {
/// Create Rustls based service. /// Create Rustls based service.
#[cfg_attr(docsrs, doc(cfg(feature = "rustls")))]
pub fn rustls( pub fn rustls(
self, self,
mut config: ServerConfig, mut config: ServerConfig,

View File

@ -0,0 +1,51 @@
//! Common header names not defined in [`http`].
//!
//! Any headers added to this file will need to be re-exported from the list at `crate::header`.
use http::header::HeaderName;
/// Response header field that indicates how caches have handled that response and its corresponding
/// request.
///
/// See [RFC 9211](https://www.rfc-editor.org/rfc/rfc9211) for full semantics.
pub const CACHE_STATUS: HeaderName = HeaderName::from_static("cache-status");
/// Response header field that allows origin servers to control the behavior of CDN caches
/// interposed between them and clients separately from other caches that might handle the response.
///
/// See [RFC 9213](https://www.rfc-editor.org/rfc/rfc9213) for full semantics.
pub const CDN_CACHE_CONTROL: HeaderName = HeaderName::from_static("cdn-cache-control");
/// Response header that prevents a document from loading any cross-origin resources that don't
/// explicitly grant the document permission (using [CORP] or [CORS]).
///
/// [CORP]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Cross-Origin_Resource_Policy_(CORP)
/// [CORS]: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
pub const CROSS_ORIGIN_EMBEDDER_POLICY: HeaderName =
HeaderName::from_static("cross-origin-embedder-policy");
/// Response header that allows you to ensure a top-level document does not share a browsing context
/// group with cross-origin documents.
pub const CROSS_ORIGIN_OPENER_POLICY: HeaderName =
HeaderName::from_static("cross-origin-opener-policy");
/// Response header that conveys a desire that the browser blocks no-cors cross-origin/cross-site
/// requests to the given resource.
pub const CROSS_ORIGIN_RESOURCE_POLICY: HeaderName =
HeaderName::from_static("cross-origin-resource-policy");
/// Response header that provides a mechanism to allow and deny the use of browser features in a
/// document or within any `<iframe>` elements in the document.
pub const PERMISSIONS_POLICY: HeaderName = HeaderName::from_static("permissions-policy");
/// Request header (de-facto standard) for identifying the originating IP address of a client
/// connecting to a web server through a proxy server.
pub const X_FORWARDED_FOR: HeaderName = HeaderName::from_static("x-forwarded-for");
/// Request header (de-facto standard) for identifying the original host requested by the client in
/// the `Host` HTTP request header.
pub const X_FORWARDED_HOST: HeaderName = HeaderName::from_static("x-forwarded-host");
/// Request header (de-facto standard) for identifying the protocol that a client used to connect to
/// your proxy or load balancer.
pub const X_FORWARDED_PROTO: HeaderName = HeaderName::from_static("x-forwarded-proto");
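
A small hedged sketch of how these new constants slot into the existing `HeaderMap` API (the header values shown are placeholders, not recommendations):

use actix_http::header::{self, HeaderMap, HeaderValue};

fn main() {
    let mut headers = HeaderMap::new();

    // The constants work anywhere a `HeaderName` is accepted, same as the `http` re-exports.
    headers.insert(
        header::CDN_CACHE_CONTROL,
        HeaderValue::from_static("max-age=3600"),
    );
    headers.insert(
        header::CACHE_STATUS,
        HeaderValue::from_static("ExampleCache; hit"),
    );

    assert!(headers.contains_key(&header::CACHE_STATUS));
    assert_eq!(headers.len(), 2);
}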

View File

@ -150,9 +150,7 @@ impl HeaderMap {
/// assert_eq!(map.len(), 3); /// assert_eq!(map.len(), 3);
/// ``` /// ```
pub fn len(&self) -> usize { pub fn len(&self) -> usize {
self.inner self.inner.values().map(|vals| vals.len()).sum()
.iter()
.fold(0, |acc, (_, values)| acc + values.len())
} }
/// Returns the number of _keys_ stored in the map. /// Returns the number of _keys_ stored in the map.
@ -309,7 +307,7 @@ impl HeaderMap {
pub fn get_all(&self, key: impl AsHeaderName) -> std::slice::Iter<'_, HeaderValue> { pub fn get_all(&self, key: impl AsHeaderName) -> std::slice::Iter<'_, HeaderValue> {
match self.get_value(key) { match self.get_value(key) {
Some(value) => value.iter(), Some(value) => value.iter(),
None => (&[]).iter(), None => [].iter(),
} }
} }
@ -552,6 +550,39 @@ impl HeaderMap {
Keys(self.inner.keys()) Keys(self.inner.keys())
} }
/// Retains only the headers specified by the predicate.
///
/// In other words, removes all headers `(name, val)` for which `retain_fn(&name, &mut val)`
/// returns false.
///
/// The order in which headers are visited should be considered arbitrary.
///
/// # Examples
/// ```
/// # use actix_http::header::{self, HeaderMap, HeaderValue};
/// let mut map = HeaderMap::new();
///
/// map.append(header::HOST, HeaderValue::from_static("duck.com"));
/// map.append(header::SET_COOKIE, HeaderValue::from_static("one=1"));
/// map.append(header::SET_COOKIE, HeaderValue::from_static("two=2"));
///
/// map.retain(|name, val| val.as_bytes().starts_with(b"one"));
///
/// assert_eq!(map.len(), 1);
/// assert!(map.contains_key(&header::SET_COOKIE));
/// ```
pub fn retain<F>(&mut self, mut retain_fn: F)
where
F: FnMut(&HeaderName, &mut HeaderValue) -> bool,
{
self.inner.retain(|name, vals| {
vals.inner.retain(|val| retain_fn(name, val));
// invariant: make sure newly empty value lists are removed
!vals.is_empty()
})
}
/// Clears the map, returning all name-value sets as an iterator. /// Clears the map, returning all name-value sets as an iterator.
/// ///
/// Header names will only be yielded for the first value in each set. All items that are /// Header names will only be yielded for the first value in each set. All items that are
@ -943,6 +974,55 @@ mod tests {
assert!(map.is_empty()); assert!(map.is_empty());
} }
#[test]
fn retain() {
let mut map = HeaderMap::new();
map.append(header::LOCATION, HeaderValue::from_static("/test"));
map.append(header::HOST, HeaderValue::from_static("duck.com"));
map.append(header::COOKIE, HeaderValue::from_static("one=1"));
map.append(header::COOKIE, HeaderValue::from_static("two=2"));
assert_eq!(map.len(), 4);
// by value
map.retain(|_, val| !val.as_bytes().contains(&b'/'));
assert_eq!(map.len(), 3);
// by name
map.retain(|name, _| name.as_str() != "cookie");
assert_eq!(map.len(), 1);
// keep but mutate value
map.retain(|_, val| {
*val = HeaderValue::from_static("replaced");
true
});
assert_eq!(map.len(), 1);
assert_eq!(map.get("host").unwrap(), "replaced");
}
#[test]
fn retain_removes_empty_value_lists() {
let mut map = HeaderMap::with_capacity(3);
map.append(header::HOST, HeaderValue::from_static("duck.com"));
map.append(header::HOST, HeaderValue::from_static("duck.com"));
assert_eq!(map.len(), 2);
assert_eq!(map.len_keys(), 1);
assert_eq!(map.inner.len(), 1);
assert_eq!(map.capacity(), 3);
// remove everything
map.retain(|_n, _v| false);
assert_eq!(map.len(), 0);
assert_eq!(map.len_keys(), 0);
assert_eq!(map.inner.len(), 0);
assert_eq!(map.capacity(), 3);
}
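
Beyond the doc example above, `retain` is handy for stripping sensitive headers in one pass before a map is logged or forwarded; a brief sketch (the header choices here are illustrative only):

use actix_http::header::{self, HeaderMap, HeaderValue};

fn main() {
    let mut headers = HeaderMap::new();
    headers.insert(header::ACCEPT, HeaderValue::from_static("application/json"));
    headers.insert(
        header::AUTHORIZATION,
        HeaderValue::from_static("Bearer not-a-real-token"),
    );
    headers.append(header::COOKIE, HeaderValue::from_static("one=1"));
    headers.append(header::COOKIE, HeaderValue::from_static("two=2"));

    // Drop credential-bearing headers; newly empty value lists are removed automatically.
    headers.retain(|name, _| name != &header::AUTHORIZATION && name != &header::COOKIE);

    assert_eq!(headers.len(), 1);
    assert!(headers.contains_key(&header::ACCEPT));
}
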
#[test] #[test]
fn entries_into_iter() { fn entries_into_iter() {
let mut map = HeaderMap::new(); let mut map = HeaderMap::new();

View File

@ -1,14 +1,18 @@
//! Pre-defined `HeaderName`s, traits for parsing and conversion, and other header utility methods. //! Pre-defined `HeaderName`s, traits for parsing and conversion, and other header utility methods.
// declaring new header consts will yield this error
#![allow(clippy::declare_interior_mutable_const)]
use percent_encoding::{AsciiSet, CONTROLS}; use percent_encoding::{AsciiSet, CONTROLS};
// re-export from http except header map related items // re-export from http except header map related items
pub use http::header::{ pub use ::http::header::{
HeaderName, HeaderValue, InvalidHeaderName, InvalidHeaderValue, ToStrError, HeaderName, HeaderValue, InvalidHeaderName, InvalidHeaderValue, ToStrError,
}; };
// re-export const header names // re-export const header names, list is explicit so that any updates to `common` module do not
pub use http::header::{ // conflict with this set
pub use ::http::header::{
ACCEPT, ACCEPT_CHARSET, ACCEPT_ENCODING, ACCEPT_LANGUAGE, ACCEPT_RANGES, ACCEPT, ACCEPT_CHARSET, ACCEPT_ENCODING, ACCEPT_LANGUAGE, ACCEPT_RANGES,
ACCESS_CONTROL_ALLOW_CREDENTIALS, ACCESS_CONTROL_ALLOW_HEADERS, ACCESS_CONTROL_ALLOW_CREDENTIALS, ACCESS_CONTROL_ALLOW_HEADERS,
ACCESS_CONTROL_ALLOW_METHODS, ACCESS_CONTROL_ALLOW_ORIGIN, ACCESS_CONTROL_EXPOSE_HEADERS, ACCESS_CONTROL_ALLOW_METHODS, ACCESS_CONTROL_ALLOW_ORIGIN, ACCESS_CONTROL_EXPOSE_HEADERS,
@ -30,22 +34,30 @@ pub use http::header::{
use crate::{error::ParseError, HttpMessage}; use crate::{error::ParseError, HttpMessage};
mod as_name; mod as_name;
mod common;
mod into_pair; mod into_pair;
mod into_value; mod into_value;
pub mod map; pub mod map;
mod shared; mod shared;
mod utils; mod utils;
pub use self::as_name::AsHeaderName; pub use self::{
pub use self::into_pair::TryIntoHeaderPair; as_name::AsHeaderName,
pub use self::into_value::TryIntoHeaderValue; into_pair::TryIntoHeaderPair,
pub use self::map::HeaderMap; into_value::TryIntoHeaderValue,
pub use self::shared::{ map::HeaderMap,
parse_extended_value, q, Charset, ContentEncoding, ExtendedValue, HttpDate, LanguageTag, shared::{
Quality, QualityItem, parse_extended_value, q, Charset, ContentEncoding, ExtendedValue, HttpDate,
LanguageTag, Quality, QualityItem,
},
utils::{fmt_comma_delimited, from_comma_delimited, from_one_raw_str, http_percent_encode},
}; };
pub use self::utils::{
fmt_comma_delimited, from_comma_delimited, from_one_raw_str, http_percent_encode, // re-export list is explicit so that any updates to `http` do not conflict with this set
pub use self::common::{
CACHE_STATUS, CDN_CACHE_CONTROL, CROSS_ORIGIN_EMBEDDER_POLICY, CROSS_ORIGIN_OPENER_POLICY,
CROSS_ORIGIN_RESOURCE_POLICY, PERMISSIONS_POLICY, X_FORWARDED_FOR, X_FORWARDED_HOST,
X_FORWARDED_PROTO,
}; };
/// An interface for types that already represent a valid header. /// An interface for types that already represent a valid header.

View File

@ -12,7 +12,7 @@ use crate::header::{Charset, HTTP_VALUE};
/// - A character sequence representing the actual value (`value`), separated by single quotes. /// - A character sequence representing the actual value (`value`), separated by single quotes.
/// ///
/// It is defined in [RFC 5987 §3.2](https://datatracker.ietf.org/doc/html/rfc5987#section-3.2). /// It is defined in [RFC 5987 §3.2](https://datatracker.ietf.org/doc/html/rfc5987#section-3.2).
#[derive(Clone, Debug, PartialEq)] #[derive(Clone, Debug, PartialEq, Eq)]
pub struct ExtendedValue { pub struct ExtendedValue {
/// The character set that is used to encode the `value` to a string. /// The character set that is used to encode the `value` to a string.
pub charset: Charset, pub charset: Charset,

View File

@ -147,7 +147,7 @@ mod tests {
// copy of encoding from actix-web headers // copy of encoding from actix-web headers
#[allow(clippy::enum_variant_names)] // allow Encoding prefix on EncodingExt #[allow(clippy::enum_variant_names)] // allow Encoding prefix on EncodingExt
#[derive(Clone, PartialEq, Debug)] #[derive(Debug, Clone, PartialEq, Eq)]
pub enum Encoding { pub enum Encoding {
Chunked, Chunked,
Brotli, Brotli,

View File

@ -21,10 +21,12 @@
#![allow( #![allow(
clippy::type_complexity, clippy::type_complexity,
clippy::too_many_arguments, clippy::too_many_arguments,
clippy::borrow_interior_mutable_const clippy::borrow_interior_mutable_const,
clippy::uninlined_format_args
)] )]
#![doc(html_logo_url = "https://actix.rs/img/logo.png")] #![doc(html_logo_url = "https://actix.rs/img/logo.png")]
#![doc(html_favicon_url = "https://actix.rs/favicon.ico")] #![doc(html_favicon_url = "https://actix.rs/favicon.ico")]
#![cfg_attr(docsrs, feature(doc_cfg))]
pub use ::http::{uri, uri::Uri}; pub use ::http::{uri, uri::Uri};
pub use ::http::{Method, StatusCode, Version}; pub use ::http::{Method, StatusCode, Version};
@ -39,6 +41,7 @@ pub mod error;
mod extensions; mod extensions;
pub mod h1; pub mod h1;
#[cfg(feature = "http2")] #[cfg(feature = "http2")]
#[cfg_attr(docsrs, doc(cfg(feature = "http2")))]
pub mod h2; pub mod h2;
pub mod header; pub mod header;
mod helpers; mod helpers;
@ -53,6 +56,7 @@ mod responses;
mod service; mod service;
pub mod test; pub mod test;
#[cfg(feature = "ws")] #[cfg(feature = "ws")]
#[cfg_attr(docsrs, doc(cfg(feature = "ws")))]
pub mod ws; pub mod ws;
pub use self::builder::HttpServiceBuilder; pub use self::builder::HttpServiceBuilder;
@ -69,6 +73,9 @@ pub use self::payload::{BoxedPayloadStream, Payload, PayloadStream};
pub use self::requests::{Request, RequestHead, RequestHeadType}; pub use self::requests::{Request, RequestHead, RequestHeadType};
pub use self::responses::{Response, ResponseBuilder, ResponseHead}; pub use self::responses::{Response, ResponseBuilder, ResponseHead};
pub use self::service::HttpService; pub use self::service::HttpService;
#[cfg(any(feature = "openssl", feature = "rustls"))]
#[cfg_attr(docsrs, doc(cfg(any(feature = "openssl", feature = "rustls"))))]
pub use self::service::TlsAcceptorConfig;
/// A major HTTP protocol version. /// A major HTTP protocol version.
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)] #[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]

View File

@ -3,7 +3,7 @@ use std::{cell::RefCell, ops, rc::Rc};
use bitflags::bitflags; use bitflags::bitflags;
/// Represents various types of connection /// Represents various types of connection
#[derive(Copy, Clone, PartialEq, Debug)] #[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ConnectionType { pub enum ConnectionType {
/// Close connection after response. /// Close connection after response.
Close, Close,

View File

@ -97,12 +97,10 @@ where
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use std::panic::{RefUnwindSafe, UnwindSafe};
use static_assertions::{assert_impl_all, assert_not_impl_any}; use static_assertions::{assert_impl_all, assert_not_impl_any};
use super::*; use super::*;
assert_impl_all!(Payload: Unpin); assert_impl_all!(Payload: Unpin);
assert_not_impl_any!(Payload: Send, Sync, UnwindSafe, RefUnwindSafe); assert_not_impl_any!(Payload: Send, Sync);
} }

View File

@ -113,14 +113,14 @@ impl<P> Request<P> {
#[inline] #[inline]
/// Http message part of the request /// Http message part of the request
pub fn head(&self) -> &RequestHead { pub fn head(&self) -> &RequestHead {
&*self.head &self.head
} }
#[inline] #[inline]
#[doc(hidden)] #[doc(hidden)]
/// Mutable reference to a HTTP message part of the request /// Mutable reference to a HTTP message part of the request
pub fn head_mut(&mut self) -> &mut RequestHead { pub fn head_mut(&mut self) -> &mut RequestHead {
&mut *self.head &mut self.head
} }
/// Mutable reference to the message's headers. /// Mutable reference to the message's headers.

View File

@ -237,7 +237,7 @@ mod tests {
.await; .await;
let mut stream = net::TcpStream::connect(srv.addr()).unwrap(); let mut stream = net::TcpStream::connect(srv.addr()).unwrap();
let _ = stream stream
.write_all(b"GET /camel HTTP/1.1\r\nConnection: Close\r\n\r\n") .write_all(b"GET /camel HTTP/1.1\r\nConnection: Close\r\n\r\n")
.unwrap(); .unwrap();
let mut data = vec![]; let mut data = vec![];
@ -251,7 +251,7 @@ mod tests {
assert!(memmem::find(&data, b"content-length").is_none()); assert!(memmem::find(&data, b"content-length").is_none());
let mut stream = net::TcpStream::connect(srv.addr()).unwrap(); let mut stream = net::TcpStream::connect(srv.addr()).unwrap();
let _ = stream stream
.write_all(b"GET /lower HTTP/1.1\r\nConnection: Close\r\n\r\n") .write_all(b"GET /lower HTTP/1.1\r\nConnection: Close\r\n\r\n")
.unwrap(); .unwrap();
let mut data = vec![]; let mut data = vec![];

View File

@ -83,13 +83,13 @@ impl<B> Response<B> {
/// Returns a reference to the head of this response. /// Returns a reference to the head of this response.
#[inline] #[inline]
pub fn head(&self) -> &ResponseHead { pub fn head(&self) -> &ResponseHead {
&*self.head &self.head
} }
/// Returns a mutable reference to the head of this response. /// Returns a mutable reference to the head of this response.
#[inline] #[inline]
pub fn head_mut(&mut self) -> &mut ResponseHead { pub fn head_mut(&mut self) -> &mut ResponseHead {
&mut *self.head &mut self.head
} }
/// Returns the status code of this response. /// Returns the status code of this response.

View File

@ -24,7 +24,39 @@ use crate::{
h1, ConnectCallback, OnConnectData, Protocol, Request, Response, ServiceConfig, h1, ConnectCallback, OnConnectData, Protocol, Request, Response, ServiceConfig,
}; };
/// A `ServiceFactory` for HTTP/1.1 or HTTP/2 protocol. /// A [`ServiceFactory`] for HTTP/1.1 and HTTP/2 connections.
///
/// Use [`build`](Self::build) to begin constructing service. Also see [`HttpServiceBuilder`].
///
/// # Automatic HTTP Version Selection
/// There are two ways to select the HTTP version of an incoming connection:
/// - One is to rely on the ALPN information that is provided when using TLS (HTTPS); both
/// versions are supported automatically when using either of the `.rustls()` or `.openssl()`
/// finalizing methods.
/// - The other is to read the first few bytes of the TCP stream. This is the only viable approach
/// for supporting H2C, which allows the HTTP/2 protocol to work over plaintext connections. Use
/// the `.tcp_auto_h2c()` finalizing method to enable this behavior.
///
/// # Examples
/// ```
/// # use std::convert::Infallible;
/// use actix_http::{HttpService, Request, Response, StatusCode};
///
/// // this service would be constructed in an actix_server::Server
///
/// # actix_rt::System::new().block_on(async {
/// HttpService::build()
/// // the builder finalizing method, other finalizers would not return an `HttpService`
/// .finish(|_req: Request| async move {
/// Ok::<_, Infallible>(
/// Response::build(StatusCode::OK).body("Hello!")
/// )
/// })
/// // the service finalizing method
/// // you can use `.tcp_auto_h2c()`, `.rustls()`, or `.openssl()` instead of `.tcp()`
/// .tcp();
/// # })
/// ```
pub struct HttpService<T, S, B, X = h1::ExpectHandler, U = h1::UpgradeHandler> { pub struct HttpService<T, S, B, X = h1::ExpectHandler, U = h1::UpgradeHandler> {
srv: S, srv: S,
cfg: ServiceConfig, cfg: ServiceConfig,
@ -163,7 +195,9 @@ where
U::Error: fmt::Display + Into<Response<BoxBody>>, U::Error: fmt::Display + Into<Response<BoxBody>>,
U::InitError: fmt::Debug, U::InitError: fmt::Debug,
{ {
/// Create simple tcp stream service /// Creates TCP stream service from HTTP service.
///
/// The resulting service only supports HTTP/1.x.
pub fn tcp( pub fn tcp(
self, self,
) -> impl ServiceFactory< ) -> impl ServiceFactory<
@ -179,6 +213,61 @@ where
}) })
.and_then(self) .and_then(self)
} }
/// Creates TCP stream service from HTTP service that automatically selects HTTP/1.x or HTTP/2
/// on plaintext connections.
#[cfg(feature = "http2")]
#[cfg_attr(docsrs, doc(cfg(feature = "http2")))]
pub fn tcp_auto_h2c(
self,
) -> impl ServiceFactory<
TcpStream,
Config = (),
Response = (),
Error = DispatchError,
InitError = (),
> {
fn_service(move |io: TcpStream| async move {
// subset of HTTP/2 preface defined by RFC 9113 §3.4
// this subset was chosen to maximize likelihood that peeking only once will allow us to
// reliably determine version or else it should fallback to h1 and fail quickly if data
// on the wire is junk
const H2_PREFACE: &[u8] = b"PRI * HTTP/2";
let mut buf = [0; 12];
io.peek(&mut buf).await?;
let proto = if buf == H2_PREFACE {
Protocol::Http2
} else {
Protocol::Http1
};
let peer_addr = io.peer_addr().ok();
Ok((io, proto, peer_addr))
})
.and_then(self)
}
}
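
As a rough usage sketch (assuming the `http2` feature and that the `actix-server` and `actix-rt` crates are available; the bind name and address are arbitrary), a plaintext server built this way serves both protocols from one handler:

use std::convert::Infallible;

use actix_http::{HttpService, Request, Response, StatusCode};

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    actix_server::Server::build()
        .bind("h2c-example", ("127.0.0.1", 8080), || {
            HttpService::build()
                .finish(|_req: Request| async move {
                    Ok::<_, Infallible>(Response::build(StatusCode::OK).body("Hello!"))
                })
                // peeks the first bytes of each connection and picks HTTP/1.x or HTTP/2
                .tcp_auto_h2c()
        })?
        .run()
        .await
}

A client with prior knowledge can then exercise the h2c path with, for example, `curl --http2-prior-knowledge http://127.0.0.1:8080/`.
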
/// Configuration options used when accepting a TLS connection.
#[cfg(any(feature = "openssl", feature = "rustls"))]
#[cfg_attr(docsrs, doc(cfg(any(feature = "openssl", feature = "rustls"))))]
#[derive(Debug, Default)]
pub struct TlsAcceptorConfig {
pub(crate) handshake_timeout: Option<std::time::Duration>,
}
#[cfg(any(feature = "openssl", feature = "rustls"))]
impl TlsAcceptorConfig {
/// Set TLS handshake timeout duration.
pub fn handshake_timeout(self, dur: std::time::Duration) -> Self {
Self {
handshake_timeout: Some(dur),
// ..self
}
}
} }
#[cfg(feature = "openssl")] #[cfg(feature = "openssl")]
@ -220,6 +309,7 @@ mod openssl {
U::InitError: fmt::Debug, U::InitError: fmt::Debug,
{ {
/// Create OpenSSL based service. /// Create OpenSSL based service.
#[cfg_attr(docsrs, doc(cfg(feature = "openssl")))]
pub fn openssl( pub fn openssl(
self, self,
acceptor: SslAcceptor, acceptor: SslAcceptor,
@ -230,7 +320,29 @@ mod openssl {
Error = TlsError<SslError, DispatchError>, Error = TlsError<SslError, DispatchError>,
InitError = (), InitError = (),
> { > {
Acceptor::new(acceptor) self.openssl_with_config(acceptor, TlsAcceptorConfig::default())
}
/// Create OpenSSL based service with custom TLS acceptor configuration.
#[cfg_attr(docsrs, doc(cfg(feature = "openssl")))]
pub fn openssl_with_config(
self,
acceptor: SslAcceptor,
tls_acceptor_config: TlsAcceptorConfig,
) -> impl ServiceFactory<
TcpStream,
Config = (),
Response = (),
Error = TlsError<SslError, DispatchError>,
InitError = (),
> {
let mut acceptor = Acceptor::new(acceptor);
if let Some(handshake_timeout) = tls_acceptor_config.handshake_timeout {
acceptor.set_handshake_timeout(handshake_timeout);
}
acceptor
.map_init_err(|_| { .map_init_err(|_| {
unreachable!("TLS acceptor service factory does not error on init") unreachable!("TLS acceptor service factory does not error on init")
}) })
@ -292,9 +404,26 @@ mod rustls {
U::InitError: fmt::Debug, U::InitError: fmt::Debug,
{ {
/// Create Rustls based service. /// Create Rustls based service.
#[cfg_attr(docsrs, doc(cfg(feature = "rustls")))]
pub fn rustls( pub fn rustls(
self,
config: ServerConfig,
) -> impl ServiceFactory<
TcpStream,
Config = (),
Response = (),
Error = TlsError<io::Error, DispatchError>,
InitError = (),
> {
self.rustls_with_config(config, TlsAcceptorConfig::default())
}
/// Create Rustls based service with custom TLS acceptor configuration.
#[cfg_attr(docsrs, doc(cfg(feature = "rustls")))]
pub fn rustls_with_config(
self, self,
mut config: ServerConfig, mut config: ServerConfig,
tls_acceptor_config: TlsAcceptorConfig,
) -> impl ServiceFactory< ) -> impl ServiceFactory<
TcpStream, TcpStream,
Config = (), Config = (),
@ -306,7 +435,13 @@ mod rustls {
protos.extend_from_slice(&config.alpn_protocols); protos.extend_from_slice(&config.alpn_protocols);
config.alpn_protocols = protos; config.alpn_protocols = protos;
Acceptor::new(config) let mut acceptor = Acceptor::new(config);
if let Some(handshake_timeout) = tls_acceptor_config.handshake_timeout {
acceptor.set_handshake_timeout(handshake_timeout);
}
acceptor
.map_init_err(|_| { .map_init_err(|_| {
unreachable!("TLS acceptor service factory does not error on init") unreachable!("TLS acceptor service factory does not error on init")
}) })
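
A minimal sketch of the new acceptor configuration on its own (assuming either TLS feature is enabled); the resulting value is what gets passed to the `.rustls_with_config(...)` or `.openssl_with_config(...)` finalizers above instead of the plain `.rustls()`/`.openssl()` ones:

use std::time::Duration;

use actix_http::TlsAcceptorConfig;

fn main() {
    // Cap the TLS handshake at 5 seconds; connections that take longer are dropped
    // before they ever reach the HTTP dispatcher.
    let tls_config = TlsAcceptorConfig::default().handshake_timeout(Duration::from_secs(5));

    println!("{:?}", tls_config);
}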

View File

@ -1,7 +1,7 @@
use actix_codec::{Decoder, Encoder};
use bitflags::bitflags; use bitflags::bitflags;
use bytes::{Bytes, BytesMut}; use bytes::{Bytes, BytesMut};
use bytestring::ByteString; use bytestring::ByteString;
use tokio_util::codec::{Decoder, Encoder};
use tracing::error; use tracing::error;
use super::{ use super::{
@ -11,7 +11,7 @@ use super::{
}; };
/// A WebSocket message. /// A WebSocket message.
#[derive(Debug, PartialEq)] #[derive(Debug, PartialEq, Eq)]
pub enum Message { pub enum Message {
/// Text message. /// Text message.
Text(ByteString), Text(ByteString),
@ -36,7 +36,7 @@ pub enum Message {
} }
/// A WebSocket frame. /// A WebSocket frame.
#[derive(Debug, PartialEq)] #[derive(Debug, PartialEq, Eq)]
pub enum Frame { pub enum Frame {
/// Text frame. Note that the codec does not validate UTF-8 encoding. /// Text frame. Note that the codec does not validate UTF-8 encoding.
Text(Bytes), Text(Bytes),
@ -58,7 +58,7 @@ pub enum Frame {
} }
/// A WebSocket continuation item. /// A WebSocket continuation item.
#[derive(Debug, PartialEq)] #[derive(Debug, PartialEq, Eq)]
pub enum Item { pub enum Item {
FirstText(Bytes), FirstText(Bytes),
FirstBinary(Bytes), FirstBinary(Bytes),


@ -76,7 +76,9 @@ mod inner {
use pin_project_lite::pin_project; use pin_project_lite::pin_project;
use tracing::debug; use tracing::debug;
use actix_codec::{AsyncRead, AsyncWrite, Decoder, Encoder, Framed}; use actix_codec::Framed;
use tokio::io::{AsyncRead, AsyncWrite};
use tokio_util::codec::{Decoder, Encoder};
use crate::{body::BoxBody, Response}; use crate::{body::BoxBody, Response};


@ -17,7 +17,6 @@ impl Parser {
fn parse_metadata( fn parse_metadata(
src: &[u8], src: &[u8],
server: bool, server: bool,
max_size: usize,
) -> Result<Option<(usize, bool, OpCode, usize, Option<[u8; 4]>)>, ProtocolError> { ) -> Result<Option<(usize, bool, OpCode, usize, Option<[u8; 4]>)>, ProtocolError> {
let chunk_len = src.len(); let chunk_len = src.len();
@ -60,20 +59,12 @@ impl Parser {
return Ok(None); return Ok(None);
} }
let len = u64::from_be_bytes(TryFrom::try_from(&src[idx..idx + 8]).unwrap()); let len = u64::from_be_bytes(TryFrom::try_from(&src[idx..idx + 8]).unwrap());
if len > max_size as u64 {
return Err(ProtocolError::Overflow);
}
idx += 8; idx += 8;
len as usize len as usize
} else { } else {
len as usize len as usize
}; };
// check for max allowed size
if length > max_size {
return Err(ProtocolError::Overflow);
}
let mask = if server { let mask = if server {
if chunk_len < idx + 4 { if chunk_len < idx + 4 {
return Ok(None); return Ok(None);
@ -98,11 +89,10 @@ impl Parser {
max_size: usize, max_size: usize,
) -> Result<Option<(bool, OpCode, Option<BytesMut>)>, ProtocolError> { ) -> Result<Option<(bool, OpCode, Option<BytesMut>)>, ProtocolError> {
// try to parse ws frame metadata // try to parse ws frame metadata
let (idx, finished, opcode, length, mask) = let (idx, finished, opcode, length, mask) = match Parser::parse_metadata(src, server)? {
match Parser::parse_metadata(src, server, max_size)? { None => return Ok(None),
None => return Ok(None), Some(res) => res,
Some(res) => res, };
};
// not enough data // not enough data
if src.len() < idx + length { if src.len() < idx + length {
@ -112,6 +102,13 @@ impl Parser {
// remove prefix // remove prefix
src.advance(idx); src.advance(idx);
// check for max allowed size
if length > max_size {
// drop the payload
src.advance(length);
return Err(ProtocolError::Overflow);
}
// no need for body // no need for body
if length == 0 { if length == 0 {
return Ok(Some((finished, opcode, None))); return Ok(Some((finished, opcode, None)));
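Moving the size check after the prefix is consumed is what makes an oversized frame recoverable: the parser has already skipped the payload by the time it reports `Overflow`. Assuming that holds at the `Codec` level as well, a decode loop could treat the error as non-fatal, as in this sketch:

```rust
use actix_http::ws::{Codec, ProtocolError};
use bytes::BytesMut;
use tokio_util::codec::Decoder;

// Drains whatever complete frames are currently buffered, skipping oversized ones.
fn drain_frames(buf: &mut BytesMut) {
    let mut codec = Codec::new().client_mode().max_size(1024);
    loop {
        match codec.decode(buf) {
            Ok(Some(frame)) => println!("got frame: {:?}", frame),
            Ok(None) => break, // incomplete frame; wait for more bytes
            // the oversized payload was already dropped by the parser, so keep going
            Err(ProtocolError::Overflow) => eprintln!("skipped an oversized frame"),
            Err(err) => {
                eprintln!("fatal protocol error: {}", err);
                break;
            }
        }
    }
}

fn main() {
    let mut buf = BytesMut::new();
    drain_frames(&mut buf); // empty buffer: returns immediately
}
```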
@ -316,7 +313,7 @@ mod tests {
#[test] #[test]
fn test_parse_frame_no_mask() { fn test_parse_frame_no_mask() {
let mut buf = BytesMut::from(&[0b0000_0001u8, 0b0000_0001u8][..]); let mut buf = BytesMut::from(&[0b0000_0001u8, 0b0000_0001u8][..]);
buf.extend(&[1u8]); buf.extend([1u8]);
assert!(Parser::parse(&mut buf, true, 1024).is_err()); assert!(Parser::parse(&mut buf, true, 1024).is_err());
@ -329,7 +326,7 @@ mod tests {
#[test] #[test]
fn test_parse_frame_max_size() { fn test_parse_frame_max_size() {
let mut buf = BytesMut::from(&[0b0000_0001u8, 0b0000_0010u8][..]); let mut buf = BytesMut::from(&[0b0000_0001u8, 0b0000_0010u8][..]);
buf.extend(&[1u8, 1u8]); buf.extend([1u8, 1u8]);
assert!(Parser::parse(&mut buf, true, 1).is_err()); assert!(Parser::parse(&mut buf, true, 1).is_err());
@ -339,6 +336,30 @@ mod tests {
} }
} }
#[test]
fn test_parse_frame_max_size_recoverability() {
let mut buf = BytesMut::new();
// The first text frame with length == 2, payload doesn't matter.
buf.extend([0b0000_0001u8, 0b0000_0010u8, 0b0000_0000u8, 0b0000_0000u8]);
// Next binary frame with length == 2 and payload == `[0b1111_1111u8, 0b1111_1111u8]`.

buf.extend([0b0000_0010u8, 0b0000_0010u8, 0b1111_1111u8, 0b1111_1111u8]);
assert_eq!(buf.len(), 8);
assert!(matches!(
Parser::parse(&mut buf, false, 1),
Err(ProtocolError::Overflow)
));
assert_eq!(buf.len(), 4);
let frame = extract(Parser::parse(&mut buf, false, 2));
assert!(!frame.finished);
assert_eq!(frame.opcode, OpCode::Binary);
assert_eq!(
frame.payload,
Bytes::from(vec![0b1111_1111u8, 0b1111_1111u8])
);
assert_eq!(buf.len(), 0);
}
#[test] #[test]
fn test_ping_frame() { fn test_ping_frame() {
let mut buf = BytesMut::new(); let mut buf = BytesMut::new();

View File

@ -67,7 +67,7 @@ pub enum ProtocolError {
} }
/// WebSocket handshake errors /// WebSocket handshake errors
#[derive(Debug, Clone, Copy, PartialEq, Display, Error)] #[derive(Debug, Clone, Copy, PartialEq, Eq, Display, Error)]
pub enum HandshakeError { pub enum HandshakeError {
/// Only get method is allowed. /// Only get method is allowed.
#[display(fmt = "Method not allowed.")] #[display(fmt = "Method not allowed.")]


@ -3,6 +3,7 @@ use std::{
fmt, fmt,
}; };
use base64::prelude::*;
use tracing::error; use tracing::error;
/// Operation codes defined in [RFC 6455 §11.8]. /// Operation codes defined in [RFC 6455 §11.8].
@ -244,7 +245,7 @@ pub fn hash_key(key: &[u8]) -> [u8; 28] {
}; };
let mut hash_b64 = [0; 28]; let mut hash_b64 = [0; 28];
let n = base64::encode_config_slice(&hash, base64::STANDARD, &mut hash_b64); let n = BASE64_STANDARD.encode_slice(hash, &mut hash_b64).unwrap();
assert_eq!(n, 28); assert_eq!(n, 28);
hash_b64 hash_b64
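For context, base64 0.21 moves encoding onto `Engine` methods imported via the prelude. A small sketch of `encode_slice`, using an arbitrary 20-byte stand-in for the SHA-1 digest:

```rust
use base64::prelude::*;

fn main() {
    // 20 bytes in, 28 base64 characters out -- matching the fixed-size buffer above
    let digest = [0u8; 20];
    let mut out = [0u8; 28];
    let n = BASE64_STANDARD
        .encode_slice(digest, &mut out)
        .expect("output buffer is large enough");
    assert_eq!(n, 28);
}
```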


@ -1,14 +1,15 @@
#![cfg(feature = "openssl")] #![cfg(feature = "openssl")]
#![allow(clippy::uninlined_format_args)]
extern crate tls_openssl as openssl; extern crate tls_openssl as openssl;
use std::{convert::Infallible, io}; use std::{convert::Infallible, io, time::Duration};
use actix_http::{ use actix_http::{
body::{BodyStream, BoxBody, SizedStream}, body::{BodyStream, BoxBody, SizedStream},
error::PayloadError, error::PayloadError,
header::{self, HeaderValue}, header::{self, HeaderValue},
Error, HttpService, Method, Request, Response, StatusCode, Version, Error, HttpService, Method, Request, Response, StatusCode, TlsAcceptorConfig, Version,
}; };
use actix_http_test::test_server; use actix_http_test::test_server;
use actix_service::{fn_service, ServiceFactoryExt}; use actix_service::{fn_service, ServiceFactoryExt};
@ -16,7 +17,7 @@ use actix_utils::future::{err, ok, ready};
use bytes::{Bytes, BytesMut}; use bytes::{Bytes, BytesMut};
use derive_more::{Display, Error}; use derive_more::{Display, Error};
use futures_core::Stream; use futures_core::Stream;
use futures_util::stream::{once, StreamExt as _}; use futures_util::{stream::once, StreamExt as _};
use openssl::{ use openssl::{
pkey::PKey, pkey::PKey,
ssl::{SslAcceptor, SslMethod}, ssl::{SslAcceptor, SslMethod},
@ -89,7 +90,10 @@ async fn h2_1() -> io::Result<()> {
assert_eq!(req.version(), Version::HTTP_2); assert_eq!(req.version(), Version::HTTP_2);
ok::<_, Error>(Response::ok()) ok::<_, Error>(Response::ok())
}) })
.openssl(tls_config()) .openssl_with_config(
tls_config(),
TlsAcceptorConfig::default().handshake_timeout(Duration::from_secs(5)),
)
.map_err(|_| ()) .map_err(|_| ())
}) })
.await; .await;


@ -1,4 +1,5 @@
#![cfg(feature = "rustls")] #![cfg(feature = "rustls")]
#![allow(clippy::uninlined_format_args)]
extern crate tls_rustls as rustls; extern crate tls_rustls as rustls;
@ -8,13 +9,14 @@ use std::{
net::{SocketAddr, TcpStream as StdTcpStream}, net::{SocketAddr, TcpStream as StdTcpStream},
sync::Arc, sync::Arc,
task::Poll, task::Poll,
time::Duration,
}; };
use actix_http::{ use actix_http::{
body::{BodyStream, BoxBody, SizedStream}, body::{BodyStream, BoxBody, SizedStream},
error::PayloadError, error::PayloadError,
header::{self, HeaderName, HeaderValue}, header::{self, HeaderName, HeaderValue},
Error, HttpService, Method, Request, Response, StatusCode, Version, Error, HttpService, Method, Request, Response, StatusCode, TlsAcceptorConfig, Version,
}; };
use actix_http_test::test_server; use actix_http_test::test_server;
use actix_rt::pin; use actix_rt::pin;
@ -40,7 +42,7 @@ where
let body = stream.as_mut(); let body = stream.as_mut();
match ready!(body.poll_next(cx)) { match ready!(body.poll_next(cx)) {
Some(Ok(bytes)) => buf.extend_from_slice(&*bytes), Some(Ok(bytes)) => buf.extend_from_slice(&bytes),
None => return Poll::Ready(Ok(())), None => return Poll::Ready(Ok(())),
Some(Err(err)) => return Poll::Ready(Err(err)), Some(Err(err)) => return Poll::Ready(Err(err)),
} }
@ -160,7 +162,10 @@ async fn h2_1() -> io::Result<()> {
assert_eq!(req.version(), Version::HTTP_2); assert_eq!(req.version(), Version::HTTP_2);
ok::<_, Error>(Response::ok()) ok::<_, Error>(Response::ok())
}) })
.rustls(tls_config()) .rustls_with_config(
tls_config(),
TlsAcceptorConfig::default().handshake_timeout(Duration::from_secs(5)),
)
}) })
.await; .await;


@ -1,3 +1,5 @@
#![allow(clippy::uninlined_format_args)]
use std::{ use std::{
convert::Infallible, convert::Infallible,
io::{Read, Write}, io::{Read, Write},
@ -7,18 +9,15 @@ use std::{
use actix_http::{ use actix_http::{
body::{self, BodyStream, BoxBody, SizedStream}, body::{self, BodyStream, BoxBody, SizedStream},
header, Error, HttpService, KeepAlive, Request, Response, StatusCode, header, Error, HttpService, KeepAlive, Request, Response, StatusCode, Version,
}; };
use actix_http_test::test_server; use actix_http_test::test_server;
use actix_rt::time::sleep; use actix_rt::{net::TcpStream, time::sleep};
use actix_service::fn_service; use actix_service::fn_service;
use actix_utils::future::{err, ok, ready}; use actix_utils::future::{err, ok, ready};
use bytes::Bytes; use bytes::Bytes;
use derive_more::{Display, Error}; use derive_more::{Display, Error};
use futures_util::{ use futures_util::{stream::once, FutureExt as _, StreamExt as _};
stream::{once, StreamExt as _},
FutureExt as _,
};
use regex::Regex; use regex::Regex;
#[actix_rt::test] #[actix_rt::test]
@ -858,3 +857,44 @@ async fn not_modified_spec_h1() {
srv.stop().await; srv.stop().await;
} }
#[actix_rt::test]
async fn h2c_auto() {
let mut srv = test_server(|| {
HttpService::build()
.keep_alive(KeepAlive::Disabled)
.finish(|req: Request| {
let body = match req.version() {
Version::HTTP_11 => "h1",
Version::HTTP_2 => "h2",
_ => unreachable!(),
};
ok::<_, Infallible>(Response::ok().set_body(body))
})
.tcp_auto_h2c()
})
.await;
let req = srv.get("/");
assert_eq!(req.get_version(), &Version::HTTP_11);
let mut res = req.send().await.unwrap();
assert!(res.status().is_success());
assert_eq!(res.body().await.unwrap(), &b"h1"[..]);
// awc doesn't support forcing the version to http/2 so use h2 manually
let tcp = TcpStream::connect(srv.addr()).await.unwrap();
let (h2, connection) = h2::client::handshake(tcp).await.unwrap();
tokio::spawn(async move { connection.await.unwrap() });
let mut h2 = h2.ready().await.unwrap();
let request = ::http::Request::new(());
let (response, _) = h2.send_request(request, true).unwrap();
let (head, mut body) = response.await.unwrap().into_parts();
let body = body.data().await.unwrap().unwrap();
assert!(head.status.is_success());
assert_eq!(body, &b"h2"[..]);
srv.stop().await;
}
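Outside the test harness, the new finalizer is used the same way on a plain TCP listener. A hedged sketch (address and body are illustrative; assumes actix-server and the closure-based `finish` used elsewhere in this crate):

```rust
use std::{convert::Infallible, io};

use actix_http::{HttpService, Request, Response};
use actix_server::Server;

#[actix_rt::main]
async fn main() -> io::Result<()> {
    Server::build()
        .bind("h2c-detect", ("127.0.0.1", 8080), || {
            HttpService::build()
                .finish(|req: Request| async move {
                    // the same handler serves both protocols; report which one was used
                    let body = format!("served over {:?}", req.version());
                    Ok::<_, Infallible>(Response::ok().set_body(body))
                })
                // sniff the HTTP/2 preface and upgrade to h2c automatically
                .tcp_auto_h2c()
        })?
        .run()
        .await
}
```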


@ -1,3 +1,5 @@
#![allow(clippy::uninlined_format_args)]
use std::{ use std::{
cell::Cell, cell::Cell,
convert::Infallible, convert::Infallible,


@ -1,7 +1,10 @@
# Changes # Changes
## Unreleased - 2021-xx-xx ## Unreleased - 2022-xx-xx
- Minimum supported Rust version (MSRV) is now 1.56 due to transitive `hashbrown` dependency. - Minimum supported Rust version (MSRV) is now 1.59 due to transitive `time` dependency.
- `Field::content_type()` now returns `Option<&mime::Mime>` [#2880]
[#2880]: https://github.com/actix/actix-web/pull/2880
## 0.4.0 - 2022-02-25 ## 0.4.0 - 2022-02-25


@ -19,16 +19,16 @@ actix-web = { version = "4", default-features = false }
bytes = "1" bytes = "1"
derive_more = "0.99.5" derive_more = "0.99.5"
futures-core = { version = "0.3.7", default-features = false, features = ["alloc"] } futures-core = { version = "0.3.17", default-features = false, features = ["alloc"] }
httparse = "1.3" httparse = "1.3"
local-waker = "0.1" local-waker = "0.1"
log = "0.4" log = "0.4"
mime = "0.3" mime = "0.3"
twoway = "0.2" memchr = "2.5"
[dev-dependencies] [dev-dependencies]
actix-rt = "2.2" actix-rt = "2.2"
actix-http = "3.0.0" actix-http = "3"
futures-util = { version = "0.3.7", default-features = false, features = ["alloc"] } futures-util = { version = "0.3.17", default-features = false, features = ["alloc"] }
tokio = { version = "1.8.4", features = ["sync"] } tokio = { version = "1.18.4", features = ["sync"] }
tokio-stream = "0.1" tokio-stream = "0.1"


@ -4,7 +4,7 @@
[![crates.io](https://img.shields.io/crates/v/actix-multipart?label=latest)](https://crates.io/crates/actix-multipart) [![crates.io](https://img.shields.io/crates/v/actix-multipart?label=latest)](https://crates.io/crates/actix-multipart)
[![Documentation](https://docs.rs/actix-multipart/badge.svg?version=0.4.0)](https://docs.rs/actix-multipart/0.4.0) [![Documentation](https://docs.rs/actix-multipart/badge.svg?version=0.4.0)](https://docs.rs/actix-multipart/0.4.0)
![Version](https://img.shields.io/badge/rustc-1.56+-ab6000.svg) ![Version](https://img.shields.io/badge/rustc-1.59+-ab6000.svg)
![MIT or Apache 2.0 licensed](https://img.shields.io/crates/l/actix-multipart.svg) ![MIT or Apache 2.0 licensed](https://img.shields.io/crates/l/actix-multipart.svg)
<br /> <br />
[![dependency status](https://deps.rs/crate/actix-multipart/0.4.0/status.svg)](https://deps.rs/crate/actix-multipart/0.4.0) [![dependency status](https://deps.rs/crate/actix-multipart/0.4.0/status.svg)](https://deps.rs/crate/actix-multipart/0.4.0)


@ -14,7 +14,7 @@ use crate::server::Multipart;
/// ``` /// ```
/// use actix_web::{web, HttpResponse, Error}; /// use actix_web::{web, HttpResponse, Error};
/// use actix_multipart::Multipart; /// use actix_multipart::Multipart;
/// use futures_util::stream::StreamExt as _; /// use futures_util::StreamExt as _;
/// ///
/// async fn index(mut payload: Multipart) -> Result<HttpResponse, Error> { /// async fn index(mut payload: Multipart) -> Result<HttpResponse, Error> {
/// // iterate over multipart stream /// // iterate over multipart stream


@ -2,7 +2,7 @@
#![deny(rust_2018_idioms, nonstandard_style)] #![deny(rust_2018_idioms, nonstandard_style)]
#![warn(future_incompatible)] #![warn(future_incompatible)]
#![allow(clippy::borrow_interior_mutable_const)] #![allow(clippy::borrow_interior_mutable_const, clippy::uninlined_format_args)]
mod error; mod error;
mod extractor; mod extractor;


@ -289,10 +289,8 @@ impl InnerMultipart {
match self.state { match self.state {
// read until first boundary // read until first boundary
InnerState::FirstBoundary => { InnerState::FirstBoundary => {
match InnerMultipart::skip_until_boundary( match InnerMultipart::skip_until_boundary(&mut payload, &self.boundary)?
&mut *payload, {
&self.boundary,
)? {
Some(eof) => { Some(eof) => {
if eof { if eof {
self.state = InnerState::Eof; self.state = InnerState::Eof;
@ -306,7 +304,7 @@ impl InnerMultipart {
} }
// read boundary // read boundary
InnerState::Boundary => { InnerState::Boundary => {
match InnerMultipart::read_boundary(&mut *payload, &self.boundary)? { match InnerMultipart::read_boundary(&mut payload, &self.boundary)? {
None => return Poll::Pending, None => return Poll::Pending,
Some(eof) => { Some(eof) => {
if eof { if eof {
@ -323,7 +321,7 @@ impl InnerMultipart {
// read field headers for next field // read field headers for next field
if self.state == InnerState::Headers { if self.state == InnerState::Headers {
if let Some(headers) = InnerMultipart::read_headers(&mut *payload)? { if let Some(headers) = InnerMultipart::read_headers(&mut payload)? {
self.state = InnerState::Boundary; self.state = InnerState::Boundary;
headers headers
} else { } else {
@ -361,17 +359,18 @@ impl InnerMultipart {
return Poll::Ready(Some(Err(MultipartError::NoContentDisposition))); return Poll::Ready(Some(Err(MultipartError::NoContentDisposition)));
}; };
let ct: mime::Mime = headers let ct: Option<mime::Mime> = headers
.get(&header::CONTENT_TYPE) .get(&header::CONTENT_TYPE)
.and_then(|ct| ct.to_str().ok()) .and_then(|ct| ct.to_str().ok())
.and_then(|ct| ct.parse().ok()) .and_then(|ct| ct.parse().ok());
.unwrap_or(mime::APPLICATION_OCTET_STREAM);
self.state = InnerState::Boundary; self.state = InnerState::Boundary;
// nested multipart stream is not supported // nested multipart stream is not supported
if ct.type_() == mime::MULTIPART { if let Some(mime) = &ct {
return Poll::Ready(Some(Err(MultipartError::Nested))); if mime.type_() == mime::MULTIPART {
return Poll::Ready(Some(Err(MultipartError::Nested)));
}
} }
let field = let field =
@ -399,7 +398,7 @@ impl Drop for InnerMultipart {
/// A single field in a multipart stream /// A single field in a multipart stream
pub struct Field { pub struct Field {
ct: mime::Mime, ct: Option<mime::Mime>,
cd: ContentDisposition, cd: ContentDisposition,
headers: HeaderMap, headers: HeaderMap,
inner: Rc<RefCell<InnerField>>, inner: Rc<RefCell<InnerField>>,
@ -410,7 +409,7 @@ impl Field {
fn new( fn new(
safety: Safety, safety: Safety,
headers: HeaderMap, headers: HeaderMap,
ct: mime::Mime, ct: Option<mime::Mime>,
cd: ContentDisposition, cd: ContentDisposition,
inner: Rc<RefCell<InnerField>>, inner: Rc<RefCell<InnerField>>,
) -> Self { ) -> Self {
@ -428,9 +427,13 @@ impl Field {
&self.headers &self.headers
} }
/// Returns a reference to the field's content (mime) type. /// Returns a reference to the field's content (mime) type, if it is supplied by the client.
pub fn content_type(&self) -> &mime::Mime { ///
&self.ct /// According to [RFC 7578](https://www.rfc-editor.org/rfc/rfc7578#section-4.4), if it is not
/// present, it should default to "text/plain". Note it is the responsibility of the client to
/// provide the appropriate content type, there is no attempt to validate this by the server.
pub fn content_type(&self) -> Option<&mime::Mime> {
self.ct.as_ref()
} }
/// Returns the field's Content-Disposition. /// Returns the field's Content-Disposition.
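Callers that relied on the old implicit `application/octet-stream` fallback now pick their own default. A hedged handler sketch (handler name is illustrative; the `text/plain` fallback follows the RFC 7578 note above):

```rust
use actix_multipart::Multipart;
use actix_web::{Error, HttpResponse};
use futures_util::StreamExt as _;

async fn upload(mut payload: Multipart) -> Result<HttpResponse, Error> {
    while let Some(field) = payload.next().await {
        let field = field?;
        // content_type() is now optional; fall back to text/plain per RFC 7578
        let mime = field.content_type().cloned().unwrap_or(mime::TEXT_PLAIN);
        println!("field {:?} has content type {}", field.name(), mime);
    }
    Ok(HttpResponse::Ok().finish())
}
```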
@ -482,7 +485,11 @@ impl Stream for Field {
impl fmt::Debug for Field { impl fmt::Debug for Field {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
writeln!(f, "\nField: {}", self.ct)?; if let Some(ct) = &self.ct {
writeln!(f, "\nField: {}", ct)?;
} else {
writeln!(f, "\nField:")?;
}
writeln!(f, " boundary: {}", self.inner.borrow().boundary)?; writeln!(f, " boundary: {}", self.inner.borrow().boundary)?;
writeln!(f, " headers:")?; writeln!(f, " headers:")?;
for (key, val) in self.headers.iter() { for (key, val) in self.headers.iter() {
@ -599,7 +606,7 @@ impl InnerField {
} }
loop { loop {
return if let Some(idx) = twoway::find_bytes(&payload.buf[pos..], b"\r") { return if let Some(idx) = memchr::memmem::find(&payload.buf[pos..], b"\r") {
let cur = pos + idx; let cur = pos + idx;
// check if we have enough data for boundary detection // check if we have enough data for boundary detection
@ -643,9 +650,9 @@ impl InnerField {
let result = if let Some(mut payload) = self.payload.as_ref().unwrap().get_mut(s) { let result = if let Some(mut payload) = self.payload.as_ref().unwrap().get_mut(s) {
if !self.eof { if !self.eof {
let res = if let Some(ref mut len) = self.length { let res = if let Some(ref mut len) = self.length {
InnerField::read_len(&mut *payload, len) InnerField::read_len(&mut payload, len)
} else { } else {
InnerField::read_stream(&mut *payload, &self.boundary) InnerField::read_stream(&mut payload, &self.boundary)
}; };
match res { match res {
@ -820,7 +827,7 @@ impl PayloadBuffer {
/// Read until specified ending /// Read until specified ending
fn read_until(&mut self, line: &[u8]) -> Result<Option<Bytes>, MultipartError> { fn read_until(&mut self, line: &[u8]) -> Result<Option<Bytes>, MultipartError> {
let res = twoway::find_bytes(&self.buf, line) let res = memchr::memmem::find(&self.buf, line)
.map(|idx| self.buf.split_to(idx + line.len()).freeze()); .map(|idx| self.buf.split_to(idx + line.len()).freeze());
if res.is_none() && self.eof { if res.is_none() && self.eof {
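`memchr::memmem::find` is a drop-in replacement for the deprecated `twoway::find_bytes`, returning the byte offset of the first match:

```rust
fn main() {
    let buf = b"--boundary\r\nfield data";
    // same contract as twoway::find_bytes: Some(start index) or None
    assert_eq!(memchr::memmem::find(buf, b"\r\n"), Some(10));
    assert_eq!(memchr::memmem::find(buf, b"--other"), None);
}
```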
@ -861,7 +868,7 @@ mod tests {
use actix_web::test::TestRequest; use actix_web::test::TestRequest;
use actix_web::FromRequest; use actix_web::FromRequest;
use bytes::Bytes; use bytes::Bytes;
use futures_util::{future::lazy, StreamExt}; use futures_util::{future::lazy, StreamExt as _};
use std::time::Duration; use std::time::Duration;
use tokio::sync::mpsc; use tokio::sync::mpsc;
use tokio_stream::wrappers::UnboundedReceiverStream; use tokio_stream::wrappers::UnboundedReceiverStream;
@ -1024,8 +1031,8 @@ mod tests {
assert_eq!(cd.disposition, DispositionType::FormData); assert_eq!(cd.disposition, DispositionType::FormData);
assert_eq!(cd.parameters[0], DispositionParam::Name("file".into())); assert_eq!(cd.parameters[0], DispositionParam::Name("file".into()));
assert_eq!(field.content_type().type_(), mime::TEXT); assert_eq!(field.content_type().unwrap().type_(), mime::TEXT);
assert_eq!(field.content_type().subtype(), mime::PLAIN); assert_eq!(field.content_type().unwrap().subtype(), mime::PLAIN);
match field.next().await.unwrap() { match field.next().await.unwrap() {
Ok(chunk) => assert_eq!(chunk, "test"), Ok(chunk) => assert_eq!(chunk, "test"),
@ -1041,8 +1048,8 @@ mod tests {
match multipart.next().await.unwrap() { match multipart.next().await.unwrap() {
Ok(mut field) => { Ok(mut field) => {
assert_eq!(field.content_type().type_(), mime::TEXT); assert_eq!(field.content_type().unwrap().type_(), mime::TEXT);
assert_eq!(field.content_type().subtype(), mime::PLAIN); assert_eq!(field.content_type().unwrap().subtype(), mime::PLAIN);
match field.next().await { match field.next().await {
Some(Ok(chunk)) => assert_eq!(chunk, "data"), Some(Ok(chunk)) => assert_eq!(chunk, "data"),
@ -1086,8 +1093,8 @@ mod tests {
assert_eq!(cd.disposition, DispositionType::FormData); assert_eq!(cd.disposition, DispositionType::FormData);
assert_eq!(cd.parameters[0], DispositionParam::Name("file".into())); assert_eq!(cd.parameters[0], DispositionParam::Name("file".into()));
assert_eq!(field.content_type().type_(), mime::TEXT); assert_eq!(field.content_type().unwrap().type_(), mime::TEXT);
assert_eq!(field.content_type().subtype(), mime::PLAIN); assert_eq!(field.content_type().unwrap().subtype(), mime::PLAIN);
assert_eq!(get_whole_field(&mut field).await, "test"); assert_eq!(get_whole_field(&mut field).await, "test");
} }
@ -1096,8 +1103,8 @@ mod tests {
match multipart.next().await { match multipart.next().await {
Some(Ok(mut field)) => { Some(Ok(mut field)) => {
assert_eq!(field.content_type().type_(), mime::TEXT); assert_eq!(field.content_type().unwrap().type_(), mime::TEXT);
assert_eq!(field.content_type().subtype(), mime::PLAIN); assert_eq!(field.content_type().unwrap().subtype(), mime::PLAIN);
assert_eq!(get_whole_field(&mut field).await, "data"); assert_eq!(get_whole_field(&mut field).await, "data");
} }


@ -1,7 +1,13 @@
# Changes # Changes
## Unreleased - 2021-xx-xx ## Unreleased - 2022-xx-xx
- Minimum supported Rust version (MSRV) is now 1.56 due to transitive `hashbrown` dependency.
## 0.5.1 - 2022-09-19
- Correct typo in error string for `i32` deserialization. [#2876]
- Minimum supported Rust version (MSRV) is now 1.59 due to transitive `time` dependency.
[#2876]: https://github.com/actix/actix-web/pull/2876
## 0.5.0 - 2022-02-22 ## 0.5.0 - 2022-02-22


@ -1,6 +1,6 @@
[package] [package]
name = "actix-router" name = "actix-router"
version = "0.5.0" version = "0.5.1"
authors = [ authors = [
"Nikolay Kim <fafhrd91@gmail.com>", "Nikolay Kim <fafhrd91@gmail.com>",
"Ali MJ Al-Nasrawy <alimjalnasrawy@gmail.com>", "Ali MJ Al-Nasrawy <alimjalnasrawy@gmail.com>",
@ -21,13 +21,13 @@ default = ["http"]
[dependencies] [dependencies]
bytestring = ">=0.1.5, <2" bytestring = ">=0.1.5, <2"
http = { version = "0.2.3", optional = true } http = { version = "0.2.5", optional = true }
regex = "1.5" regex = "1.5"
serde = "1" serde = "1"
tracing = { version = "0.1.30", default-features = false, features = ["log"] } tracing = { version = "0.1.30", default-features = false, features = ["log"] }
[dev-dependencies] [dev-dependencies]
criterion = { version = "0.3", features = ["html_reports"] } criterion = { version = "0.4", features = ["html_reports"] }
http = "0.2.5" http = "0.2.5"
serde = { version = "1", features = ["derive"] } serde = { version = "1", features = ["derive"] }
percent-encoding = "2.1" percent-encoding = "2.1"


@ -1,3 +1,5 @@
#![allow(clippy::uninlined_format_args)]
use criterion::{black_box, criterion_group, criterion_main, Criterion}; use criterion::{black_box, criterion_group, criterion_main, Criterion};
use std::borrow::Cow; use std::borrow::Cow;


@ -293,7 +293,7 @@ impl<'de> Deserializer<'de> for Value<'de> {
parse_value!(deserialize_bool, visit_bool, "bool"); parse_value!(deserialize_bool, visit_bool, "bool");
parse_value!(deserialize_i8, visit_i8, "i8"); parse_value!(deserialize_i8, visit_i8, "i8");
parse_value!(deserialize_i16, visit_i16, "i16"); parse_value!(deserialize_i16, visit_i16, "i16");
parse_value!(deserialize_i32, visit_i32, "i16"); parse_value!(deserialize_i32, visit_i32, "i32");
parse_value!(deserialize_i64, visit_i64, "i64"); parse_value!(deserialize_i64, visit_i64, "i64");
parse_value!(deserialize_u8, visit_u8, "u8"); parse_value!(deserialize_u8, visit_u8, "u8");
parse_value!(deserialize_u16, visit_u16, "u16"); parse_value!(deserialize_u16, visit_u16, "u16");


@ -2,6 +2,7 @@
#![deny(rust_2018_idioms, nonstandard_style)] #![deny(rust_2018_idioms, nonstandard_style)]
#![warn(future_incompatible)] #![warn(future_incompatible)]
#![allow(clippy::uninlined_format_args)]
#![doc(html_logo_url = "https://actix.rs/img/logo.png")] #![doc(html_logo_url = "https://actix.rs/img/logo.png")]
#![doc(html_favicon_url = "https://actix.rs/favicon.ico")] #![doc(html_favicon_url = "https://actix.rs/favicon.ico")]


@ -242,6 +242,7 @@ mod tests {
use super::*; use super::*;
#[allow(clippy::needless_borrow)]
#[test] #[test]
fn deref_impls() { fn deref_impls() {
let mut foo = Path::new("/foo"); let mut foo = Path::new("/foo");


@ -649,7 +649,7 @@ impl ResourceDef {
/// resource.capture_match_info_fn( /// resource.capture_match_info_fn(
/// path, /// path,
/// // when env var is not set, reject when path contains "admin" /// // when env var is not set, reject when path contains "admin"
/// |res| !(!admin_allowed && res.path().contains("admin")), /// |path| !(!admin_allowed && path.as_str().contains("admin")),
/// ) /// )
/// } /// }
/// ///
@ -1503,31 +1503,31 @@ mod tests {
fn build_path_list() { fn build_path_list() {
let mut s = String::new(); let mut s = String::new();
let resource = ResourceDef::new("/user/{item1}/test"); let resource = ResourceDef::new("/user/{item1}/test");
assert!(resource.resource_path_from_iter(&mut s, &mut (&["user1"]).iter())); assert!(resource.resource_path_from_iter(&mut s, &mut ["user1"].iter()));
assert_eq!(s, "/user/user1/test"); assert_eq!(s, "/user/user1/test");
let mut s = String::new(); let mut s = String::new();
let resource = ResourceDef::new("/user/{item1}/{item2}/test"); let resource = ResourceDef::new("/user/{item1}/{item2}/test");
assert!(resource.resource_path_from_iter(&mut s, &mut (&["item", "item2"]).iter())); assert!(resource.resource_path_from_iter(&mut s, &mut ["item", "item2"].iter()));
assert_eq!(s, "/user/item/item2/test"); assert_eq!(s, "/user/item/item2/test");
let mut s = String::new(); let mut s = String::new();
let resource = ResourceDef::new("/user/{item1}/{item2}"); let resource = ResourceDef::new("/user/{item1}/{item2}");
assert!(resource.resource_path_from_iter(&mut s, &mut (&["item", "item2"]).iter())); assert!(resource.resource_path_from_iter(&mut s, &mut ["item", "item2"].iter()));
assert_eq!(s, "/user/item/item2"); assert_eq!(s, "/user/item/item2");
let mut s = String::new(); let mut s = String::new();
let resource = ResourceDef::new("/user/{item1}/{item2}/"); let resource = ResourceDef::new("/user/{item1}/{item2}/");
assert!(resource.resource_path_from_iter(&mut s, &mut (&["item", "item2"]).iter())); assert!(resource.resource_path_from_iter(&mut s, &mut ["item", "item2"].iter()));
assert_eq!(s, "/user/item/item2/"); assert_eq!(s, "/user/item/item2/");
let mut s = String::new(); let mut s = String::new();
assert!(!resource.resource_path_from_iter(&mut s, &mut (&["item"]).iter())); assert!(!resource.resource_path_from_iter(&mut s, &mut ["item"].iter()));
let mut s = String::new(); let mut s = String::new();
assert!(resource.resource_path_from_iter(&mut s, &mut (&["item", "item2"]).iter())); assert!(resource.resource_path_from_iter(&mut s, &mut ["item", "item2"].iter()));
assert_eq!(s, "/user/item/item2/"); assert_eq!(s, "/user/item/item2/");
assert!(!resource.resource_path_from_iter(&mut s, &mut (&["item"]).iter())); assert!(!resource.resource_path_from_iter(&mut s, &mut ["item"].iter()));
let mut s = String::new(); let mut s = String::new();
assert!(resource.resource_path_from_iter(&mut s, &mut vec!["item", "item2"].iter())); assert!(resource.resource_path_from_iter(&mut s, &mut vec!["item", "item2"].iter()));
@ -1604,10 +1604,10 @@ mod tests {
let resource = ResourceDef::new("/user/{item1}*"); let resource = ResourceDef::new("/user/{item1}*");
let mut s = String::new(); let mut s = String::new();
assert!(!resource.resource_path_from_iter(&mut s, &mut (&[""; 0]).iter())); assert!(!resource.resource_path_from_iter(&mut s, &mut [""; 0].iter()));
let mut s = String::new(); let mut s = String::new();
assert!(resource.resource_path_from_iter(&mut s, &mut (&["user1"]).iter())); assert!(resource.resource_path_from_iter(&mut s, &mut ["user1"].iter()));
assert_eq!(s, "/user/user1"); assert_eq!(s, "/user/user1");
let mut s = String::new(); let mut s = String::new();


@ -27,7 +27,7 @@ impl<'a> ResourcePath for &'a str {
impl ResourcePath for bytestring::ByteString { impl ResourcePath for bytestring::ByteString {
fn path(&self) -> &str { fn path(&self) -> &str {
&*self self
} }
} }


@ -1,6 +1,6 @@
use crate::{IntoPatterns, Resource, ResourceDef}; use crate::{IntoPatterns, Resource, ResourceDef};
#[derive(Debug, Copy, Clone, PartialEq)] #[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub struct ResourceId(pub u16); pub struct ResourceId(pub u16);
/// Resource router. /// Resource router.


@ -1,7 +1,11 @@
# Changes # Changes
## Unreleased - 2021-xx-xx ## Unreleased - 2022-xx-xx
- Minimum supported Rust version (MSRV) is now 1.56 due to transitive `hashbrown` dependency. - Minimum supported Rust version (MSRV) is now 1.59 due to transitive `time` dependency.
## 0.1.0 - 2022-07-24
- Minimum supported Rust version (MSRV) is now 1.57 due to transitive `time` dependency.
## 0.1.0-beta.13 - 2022-02-16 ## 0.1.0-beta.13 - 2022-02-16


@ -1,6 +1,6 @@
[package] [package]
name = "actix-test" name = "actix-test"
version = "0.1.0-beta.13" version = "0.1.0"
authors = [ authors = [
"Nikolay Kim <fafhrd91@gmail.com>", "Nikolay Kim <fafhrd91@gmail.com>",
"Rob Ede <robjtede@icloud.com>", "Rob Ede <robjtede@icloud.com>",
@ -30,19 +30,19 @@ openssl = ["tls-openssl", "actix-http/openssl", "awc/openssl"]
[dependencies] [dependencies]
actix-codec = "0.5" actix-codec = "0.5"
actix-http = "3" actix-http = "3"
actix-http-test = "3.0.0-beta.13" actix-http-test = "3"
actix-rt = "2.1" actix-rt = "2.1"
actix-service = "2" actix-service = "2"
actix-utils = "3" actix-utils = "3"
actix-web = { version = "4", default-features = false, features = ["cookies"] } actix-web = { version = "4", default-features = false, features = ["cookies"] }
awc = { version = "3", default-features = false, features = ["cookies"] } awc = { version = "3", default-features = false, features = ["cookies"] }
futures-core = { version = "0.3.7", default-features = false, features = ["std"] } futures-core = { version = "0.3.17", default-features = false, features = ["std"] }
futures-util = { version = "0.3.7", default-features = false, features = [] } futures-util = { version = "0.3.17", default-features = false, features = [] }
log = "0.4" log = "0.4"
serde = { version = "1", features = ["derive"] } serde = { version = "1", features = ["derive"] }
serde_json = "1" serde_json = "1"
serde_urlencoded = "0.7" serde_urlencoded = "0.7"
tls-openssl = { package = "openssl", version = "0.10.9", optional = true } tls-openssl = { package = "openssl", version = "0.10.9", optional = true }
tls-rustls = { package = "rustls", version = "0.20.0", optional = true } tls-rustls = { package = "rustls", version = "0.20.0", optional = true }
tokio = { version = "1.8.4", features = ["sync"] } tokio = { version = "1.18.4", features = ["sync"] }


@ -321,6 +321,7 @@ where
// all thread managed resources should be dropped at this point // all thread managed resources should be dropped at this point
}); });
#[allow(clippy::let_underscore_future)]
let _ = thread_stop_tx.send(()); let _ = thread_stop_tx.send(());
}); });
@ -567,6 +568,7 @@ impl Drop for TestServer {
// without needing to await anything // without needing to await anything
// signal server to stop // signal server to stop
#[allow(clippy::let_underscore_future)]
let _ = self.server.stop(true); let _ = self.server.stop(true);
// signal system to stop // signal system to stop


@ -1,7 +1,7 @@
# Changes # Changes
## Unreleased - 2021-xx-xx ## Unreleased - 2022-xx-xx
- Minimum supported Rust version (MSRV) is now 1.56 due to transitive `hashbrown` dependency. - Minimum supported Rust version (MSRV) is now 1.57 due to transitive `time` dependency.
## 4.1.0 - 2022-03-02 ## 4.1.0 - 2022-03-02


@ -21,14 +21,18 @@ actix-web = { version = "4", default-features = false }
bytes = "1" bytes = "1"
bytestring = "1" bytestring = "1"
futures-core = { version = "0.3.7", default-features = false } futures-core = { version = "0.3.17", default-features = false }
pin-project-lite = "0.2" pin-project-lite = "0.2"
tokio = { version = "1.13.1", features = ["sync"] } tokio = { version = "1.18.4", features = ["sync"] }
tokio-util = { version = "0.7", features = ["codec"] }
[dev-dependencies] [dev-dependencies]
actix-rt = "2.2" actix-rt = "2.2"
actix-test = "0.1.0-beta.13" actix-test = "0.1"
awc = { version = "3", default-features = false } awc = { version = "3", default-features = false }
actix-web = { version = "4", features = ["macros"] }
mime = "0.3"
env_logger = "0.9" env_logger = "0.9"
futures-util = { version = "0.3.7", default-features = false } futures-util = { version = "0.3.17", default-features = false }


@ -4,7 +4,7 @@
[![crates.io](https://img.shields.io/crates/v/actix-web-actors?label=latest)](https://crates.io/crates/actix-web-actors) [![crates.io](https://img.shields.io/crates/v/actix-web-actors?label=latest)](https://crates.io/crates/actix-web-actors)
[![Documentation](https://docs.rs/actix-web-actors/badge.svg?version=4.1.0)](https://docs.rs/actix-web-actors/4.1.0) [![Documentation](https://docs.rs/actix-web-actors/badge.svg?version=4.1.0)](https://docs.rs/actix-web-actors/4.1.0)
![Version](https://img.shields.io/badge/rustc-1.56+-ab6000.svg) ![Version](https://img.shields.io/badge/rustc-1.59+-ab6000.svg)
![License](https://img.shields.io/crates/l/actix-web-actors.svg) ![License](https://img.shields.io/crates/l/actix-web-actors.svg)
<br /> <br />
[![dependency status](https://deps.rs/crate/actix-web-actors/4.1.0/status.svg)](https://deps.rs/crate/actix-web-actors/4.1.0) [![dependency status](https://deps.rs/crate/actix-web-actors/4.1.0/status.svg)](https://deps.rs/crate/actix-web-actors/4.1.0)


@ -14,6 +14,58 @@ use futures_core::Stream;
use tokio::sync::oneshot::Sender; use tokio::sync::oneshot::Sender;
/// Execution context for HTTP actors /// Execution context for HTTP actors
///
/// # Example
///
/// A demonstration of [server-sent events](https://developer.mozilla.org/docs/Web/API/Server-sent_events) using actors:
///
/// ```no_run
/// use std::time::Duration;
///
/// use actix::{Actor, AsyncContext};
/// use actix_web::{get, http::header, App, HttpResponse, HttpServer};
/// use actix_web_actors::HttpContext;
/// use bytes::Bytes;
///
/// struct MyActor {
/// count: usize,
/// }
///
/// impl Actor for MyActor {
/// type Context = HttpContext<Self>;
///
/// fn started(&mut self, ctx: &mut Self::Context) {
/// ctx.run_later(Duration::from_millis(100), Self::write);
/// }
/// }
///
/// impl MyActor {
/// fn write(&mut self, ctx: &mut HttpContext<Self>) {
/// self.count += 1;
/// if self.count > 3 {
/// ctx.write_eof()
/// } else {
/// ctx.write(Bytes::from(format!("event: count\ndata: {}\n\n", self.count)));
/// ctx.run_later(Duration::from_millis(100), Self::write);
/// }
/// }
/// }
///
/// #[get("/")]
/// async fn index() -> HttpResponse {
/// HttpResponse::Ok()
/// .insert_header(header::ContentType(mime::TEXT_EVENT_STREAM))
/// .streaming(HttpContext::create(MyActor { count: 0 }))
/// }
///
/// #[actix_web::main]
/// async fn main() -> std::io::Result<()> {
/// HttpServer::new(|| App::new().service(index))
/// .bind(("127.0.0.1", 8080))?
/// .run()
/// .await
/// }
/// ```
pub struct HttpContext<A> pub struct HttpContext<A>
where where
A: Actor<Context = HttpContext<A>>, A: Actor<Context = HttpContext<A>>,
@ -210,7 +262,7 @@ mod tests {
type Context = HttpContext<Self>; type Context = HttpContext<Self>;
fn started(&mut self, ctx: &mut Self::Context) { fn started(&mut self, ctx: &mut Self::Context) {
ctx.run_later(Duration::from_millis(100), |slf, ctx| slf.write(ctx)); ctx.run_later(Duration::from_millis(100), Self::write);
} }
} }
@ -221,7 +273,7 @@ mod tests {
ctx.write_eof() ctx.write_eof()
} else { } else {
ctx.write(Bytes::from(format!("LINE-{}", self.count))); ctx.write(Bytes::from(format!("LINE-{}", self.count)));
ctx.run_later(Duration::from_millis(100), |slf, ctx| slf.write(ctx)); ctx.run_later(Duration::from_millis(100), Self::write);
} }
} }
} }


@ -1,7 +1,63 @@
//! Actix actors support for Actix Web. //! Actix actors support for Actix Web.
//!
//! # Examples
//!
//! ```no_run
//! use actix::{Actor, StreamHandler};
//! use actix_web::{get, web, App, Error, HttpRequest, HttpResponse, HttpServer};
//! use actix_web_actors::ws;
//!
//! /// Define Websocket actor
//! struct MyWs;
//!
//! impl Actor for MyWs {
//! type Context = ws::WebsocketContext<Self>;
//! }
//!
//! /// Handler for ws::Message message
//! impl StreamHandler<Result<ws::Message, ws::ProtocolError>> for MyWs {
//! fn handle(&mut self, msg: Result<ws::Message, ws::ProtocolError>, ctx: &mut Self::Context) {
//! match msg {
//! Ok(ws::Message::Ping(msg)) => ctx.pong(&msg),
//! Ok(ws::Message::Text(text)) => ctx.text(text),
//! Ok(ws::Message::Binary(bin)) => ctx.binary(bin),
//! _ => (),
//! }
//! }
//! }
//!
//! #[get("/ws")]
//! async fn index(req: HttpRequest, stream: web::Payload) -> Result<HttpResponse, Error> {
//! ws::start(MyWs, &req, stream)
//! }
//!
//! #[actix_web::main]
//! async fn main() -> std::io::Result<()> {
//! HttpServer::new(|| App::new().service(index))
//! .bind(("127.0.0.1", 8080))?
//! .run()
//! .await
//! }
//! ```
//!
//! # Documentation & Community Resources
//! In addition to this API documentation, several other resources are available:
//!
//! * [Website & User Guide](https://actix.rs/)
//! * [Documentation for `actix_web`](actix_web)
//! * [Examples Repository](https://github.com/actix/examples)
//! * [Community Chat on Discord](https://discord.gg/NWpN5mmg3x)
//!
//! To get started navigating the API docs, you may consider looking at the following pages first:
//!
//! * [`ws`]: This module provides actor support for WebSockets.
//!
//! * [`HttpContext`]: This struct provides actor support for streaming HTTP responses.
//!
#![deny(rust_2018_idioms, nonstandard_style)] #![deny(rust_2018_idioms, nonstandard_style)]
#![warn(future_incompatible)] #![warn(future_incompatible)]
#![allow(clippy::uninlined_format_args)]
mod context; mod context;
pub mod ws; pub mod ws;


@ -1,4 +1,60 @@
//! Websocket integration. //! Websocket integration.
//!
//! # Examples
//!
//! ```no_run
//! use actix::{Actor, StreamHandler};
//! use actix_web::{get, web, App, Error, HttpRequest, HttpResponse, HttpServer};
//! use actix_web_actors::ws;
//!
//! /// Define Websocket actor
//! struct MyWs;
//!
//! impl Actor for MyWs {
//! type Context = ws::WebsocketContext<Self>;
//! }
//!
//! /// Handler for ws::Message message
//! impl StreamHandler<Result<ws::Message, ws::ProtocolError>> for MyWs {
//! fn handle(&mut self, msg: Result<ws::Message, ws::ProtocolError>, ctx: &mut Self::Context) {
//! match msg {
//! Ok(ws::Message::Ping(msg)) => ctx.pong(&msg),
//! Ok(ws::Message::Text(text)) => ctx.text(text),
//! Ok(ws::Message::Binary(bin)) => ctx.binary(bin),
//! _ => (),
//! }
//! }
//! }
//!
//! #[get("/ws")]
//! async fn websocket(req: HttpRequest, stream: web::Payload) -> Result<HttpResponse, Error> {
//! ws::start(MyWs, &req, stream)
//! }
//!
//! const MAX_FRAME_SIZE: usize = 16_384; // 16KiB
//!
//! #[get("/custom-ws")]
//! async fn custom_websocket(req: HttpRequest, stream: web::Payload) -> Result<HttpResponse, Error> {
//! // Create a Websocket session with a specific max frame size, and protocols.
//! ws::WsResponseBuilder::new(MyWs, &req, stream)
//! .frame_size(MAX_FRAME_SIZE)
//! .protocols(&["A", "B"])
//! .start()
//! }
//!
//! #[actix_web::main]
//! async fn main() -> std::io::Result<()> {
//! HttpServer::new(|| {
//! App::new()
//! .service(websocket)
//! .service(custom_websocket)
//! })
//! .bind(("127.0.0.1", 8080))?
//! .run()
//! .await
//! }
//! ```
//!
use std::{ use std::{
collections::VecDeque, collections::VecDeque,
@ -18,7 +74,6 @@ use actix::{
Actor, ActorContext, ActorState, Addr, AsyncContext, Handler, Message as ActixMessage, Actor, ActorContext, ActorState, Addr, AsyncContext, Handler, Message as ActixMessage,
SpawnHandle, SpawnHandle,
}; };
use actix_codec::{Decoder as _, Encoder as _};
use actix_http::ws::{hash_key, Codec}; use actix_http::ws::{hash_key, Codec};
pub use actix_http::ws::{ pub use actix_http::ws::{
CloseCode, CloseReason, Frame, HandshakeError, Message, ProtocolError, CloseCode, CloseReason, Frame, HandshakeError, Message, ProtocolError,
@ -36,25 +91,57 @@ use bytestring::ByteString;
use futures_core::Stream; use futures_core::Stream;
use pin_project_lite::pin_project; use pin_project_lite::pin_project;
use tokio::sync::oneshot; use tokio::sync::oneshot;
use tokio_util::codec::{Decoder as _, Encoder as _};
/// Builder for Websocket session response. /// Builder for Websocket session response.
/// ///
/// # Examples /// # Examples
/// ///
/// Create a Websocket session response with default configuration. /// ```no_run
/// ```ignore /// # use actix::{Actor, StreamHandler};
/// WsResponseBuilder::new(WsActor, &req, stream).start() /// # use actix_web::{get, web, App, Error, HttpRequest, HttpResponse, HttpServer};
/// ``` /// # use actix_web_actors::ws;
/// #
/// # struct MyWs;
/// #
/// # impl Actor for MyWs {
/// # type Context = ws::WebsocketContext<Self>;
/// # }
/// #
/// # /// Handler for ws::Message message
/// # impl StreamHandler<Result<ws::Message, ws::ProtocolError>> for MyWs {
/// # fn handle(&mut self, msg: Result<ws::Message, ws::ProtocolError>, ctx: &mut Self::Context) {}
/// # }
/// #
/// #[get("/ws")]
/// async fn websocket(req: HttpRequest, stream: web::Payload) -> Result<HttpResponse, Error> {
/// ws::WsResponseBuilder::new(MyWs, &req, stream).start()
/// }
/// ///
/// Create a Websocket session with a specific max frame size, [`Codec`], and protocols.
/// ```ignore
/// const MAX_FRAME_SIZE: usize = 16_384; // 16KiB /// const MAX_FRAME_SIZE: usize = 16_384; // 16KiB
/// ///
/// ws::WsResponseBuilder::new(WsActor, &req, stream) /// #[get("/custom-ws")]
/// .codec(Codec::new()) /// async fn custom_websocket(req: HttpRequest, stream: web::Payload) -> Result<HttpResponse, Error> {
/// .protocols(&["A", "B"]) /// // Create a Websocket session with a specific max frame size, codec, and protocols.
/// .frame_size(MAX_FRAME_SIZE) /// ws::WsResponseBuilder::new(MyWs, &req, stream)
/// .start() /// .codec(actix_http::ws::Codec::new())
/// // This will overwrite the codec's max frame-size
/// .frame_size(MAX_FRAME_SIZE)
/// .protocols(&["A", "B"])
/// .start()
/// }
/// #
/// # #[actix_web::main]
/// # async fn main() -> std::io::Result<()> {
/// # HttpServer::new(|| {
/// # App::new()
/// # .service(websocket)
/// # .service(custom_websocket)
/// # })
/// # .bind(("127.0.0.1", 8080))?
/// # .run()
/// # .await
/// # }
/// ``` /// ```
pub struct WsResponseBuilder<'a, A, T> pub struct WsResponseBuilder<'a, A, T>
where where


@ -3,7 +3,7 @@ use actix_http::ws::Codec;
use actix_web::{web, App, HttpRequest}; use actix_web::{web, App, HttpRequest};
use actix_web_actors::ws; use actix_web_actors::ws;
use bytes::Bytes; use bytes::Bytes;
use futures_util::{SinkExt, StreamExt}; use futures_util::{SinkExt as _, StreamExt as _};
struct Ws; struct Ws;


@ -1,6 +1,13 @@
# Changes # Changes
## Unreleased - 2021-xx-xx ## Unreleased - 2022-xx-xx
## 4.1.0 - 2022-09-11
- Add `#[routes]` macro to support multiple paths for one handler. [#2718]
- Minimum supported Rust version (MSRV) is now 1.59 due to transitive `time` dependency.
[#2718]: https://github.com/actix/actix-web/pull/2718
## 4.0.1 - 2022-06-11 ## 4.0.1 - 2022-06-11


@ -1,6 +1,6 @@
[package] [package]
name = "actix-web-codegen" name = "actix-web-codegen"
version = "4.0.1" version = "4.1.0"
description = "Routing and runtime macros for Actix Web" description = "Routing and runtime macros for Actix Web"
homepage = "https://actix.rs" homepage = "https://actix.rs"
repository = "https://github.com/actix/actix-web.git" repository = "https://github.com/actix/actix-web.git"
@ -15,18 +15,18 @@ edition = "2018"
proc-macro = true proc-macro = true
[dependencies] [dependencies]
actix-router = "0.5.0" actix-router = "0.5"
proc-macro2 = "1" proc-macro2 = "1"
quote = "1" quote = "1"
syn = { version = "1", features = ["full", "parsing"] } syn = { version = "1", features = ["full", "extra-traits"] }
[dev-dependencies] [dev-dependencies]
actix-macros = "0.2.3" actix-macros = "0.2.3"
actix-rt = "2.2" actix-rt = "2.2"
actix-test = "0.1.0-beta.13" actix-test = "0.1"
actix-utils = "3.0.0" actix-utils = "3"
actix-web = "4.0.0" actix-web = "4"
futures-core = { version = "0.3.7", default-features = false, features = ["alloc"] } futures-core = { version = "0.3.17", default-features = false, features = ["alloc"] }
trybuild = "1" trybuild = "1"
rustversion = "1" rustversion = "1"


@ -3,11 +3,11 @@
> Routing and runtime macros for Actix Web. > Routing and runtime macros for Actix Web.
[![crates.io](https://img.shields.io/crates/v/actix-web-codegen?label=latest)](https://crates.io/crates/actix-web-codegen) [![crates.io](https://img.shields.io/crates/v/actix-web-codegen?label=latest)](https://crates.io/crates/actix-web-codegen)
[![Documentation](https://docs.rs/actix-web-codegen/badge.svg?version=4.0.1)](https://docs.rs/actix-web-codegen/4.0.1) [![Documentation](https://docs.rs/actix-web-codegen/badge.svg?version=4.1.0)](https://docs.rs/actix-web-codegen/4.1.0)
![Version](https://img.shields.io/badge/rustc-1.56+-ab6000.svg) ![Version](https://img.shields.io/badge/rustc-1.59+-ab6000.svg)
![License](https://img.shields.io/crates/l/actix-web-codegen.svg) ![License](https://img.shields.io/crates/l/actix-web-codegen.svg)
<br /> <br />
[![dependency status](https://deps.rs/crate/actix-web-codegen/4.0.1/status.svg)](https://deps.rs/crate/actix-web-codegen/4.0.1) [![dependency status](https://deps.rs/crate/actix-web-codegen/4.1.0/status.svg)](https://deps.rs/crate/actix-web-codegen/4.1.0)
[![Download](https://img.shields.io/crates/d/actix-web-codegen.svg)](https://crates.io/crates/actix-web-codegen) [![Download](https://img.shields.io/crates/d/actix-web-codegen.svg)](https://crates.io/crates/actix-web-codegen)
[![Chat on Discord](https://img.shields.io/discord/771444961383153695?label=chat&logo=discord)](https://discord.gg/NWpN5mmg3x) [![Chat on Discord](https://img.shields.io/discord/771444961383153695?label=chat&logo=discord)](https://discord.gg/NWpN5mmg3x)


@ -46,9 +46,20 @@
//! ``` //! ```
//! //!
//! # Multiple Path Handlers //! # Multiple Path Handlers
//! There are no macros to generate multi-path handlers. Let us know in [this issue]. //! Acts as a wrapper for multiple single method handler macros. It takes no arguments and
//! delegates those to the macros for the individual methods. See [macro@routes] macro docs.
//! //!
//! [this issue]: https://github.com/actix/actix-web/issues/1709 //! ```
//! # use actix_web::HttpResponse;
//! # use actix_web_codegen::routes;
//! #[routes]
//! #[get("/test")]
//! #[get("/test2")]
//! #[delete("/test")]
//! async fn example() -> HttpResponse {
//! HttpResponse::Ok().finish()
//! }
//! ```
//! //!
//! [actix-web attributes docs]: https://docs.rs/actix-web/latest/actix_web/#attributes //! [actix-web attributes docs]: https://docs.rs/actix-web/latest/actix_web/#attributes
//! [GET]: macro@get //! [GET]: macro@get
@ -104,6 +115,39 @@ pub fn route(args: TokenStream, input: TokenStream) -> TokenStream {
route::with_method(None, args, input) route::with_method(None, args, input)
} }
/// Creates resource handler, allowing multiple HTTP methods and paths.
///
/// # Syntax
/// ```plain
/// #[routes]
/// #[<method>("path", ...)]
/// #[<method>("path", ...)]
/// ...
/// ```
///
/// # Attributes
/// The `routes` macro itself has no parameters, but allows specifying the attribute macros for
/// the multiple paths and/or methods, e.g. [`GET`](macro@get) and [`POST`](macro@post).
///
/// These helper attributes take the same parameters as the [single method handlers](crate#single-method-handler).
///
/// # Examples
/// ```
/// # use actix_web::HttpResponse;
/// # use actix_web_codegen::routes;
/// #[routes]
/// #[get("/test")]
/// #[get("/test2")]
/// #[delete("/test")]
/// async fn example() -> HttpResponse {
/// HttpResponse::Ok().finish()
/// }
/// ```
#[proc_macro_attribute]
pub fn routes(_: TokenStream, input: TokenStream) -> TokenStream {
route::with_methods(input)
}
macro_rules! method_macro { macro_rules! method_macro {
($variant:ident, $method:ident) => { ($variant:ident, $method:ident) => {
#[doc = concat!("Creates route handler with `actix_web::guard::", stringify!($variant), "`.")] #[doc = concat!("Creates route handler with `actix_web::guard::", stringify!($variant), "`.")]


@ -3,24 +3,12 @@ use std::{collections::HashSet, convert::TryFrom};
use actix_router::ResourceDef; use actix_router::ResourceDef;
use proc_macro::TokenStream; use proc_macro::TokenStream;
use proc_macro2::{Span, TokenStream as TokenStream2}; use proc_macro2::{Span, TokenStream as TokenStream2};
use quote::{format_ident, quote, ToTokens, TokenStreamExt}; use quote::{quote, ToTokens, TokenStreamExt};
use syn::{parse_macro_input, AttributeArgs, Ident, LitStr, NestedMeta, Path}; use syn::{parse_macro_input, AttributeArgs, Ident, LitStr, Meta, NestedMeta, Path};
enum ResourceType {
Async,
Sync,
}
impl ToTokens for ResourceType {
fn to_tokens(&self, stream: &mut TokenStream2) {
let ident = format_ident!("to");
stream.append(ident);
}
}
macro_rules! method_type { macro_rules! method_type {
( (
$($variant:ident, $upper:ident,)+ $($variant:ident, $upper:ident, $lower:ident,)+
) => { ) => {
#[derive(Debug, PartialEq, Eq, Hash)] #[derive(Debug, PartialEq, Eq, Hash)]
pub enum MethodType { pub enum MethodType {
@ -42,20 +30,27 @@ macro_rules! method_type {
_ => Err(format!("Unexpected HTTP method: `{}`", method)), _ => Err(format!("Unexpected HTTP method: `{}`", method)),
} }
} }
fn from_path(method: &Path) -> Result<Self, ()> {
match () {
$(_ if method.is_ident(stringify!($lower)) => Ok(Self::$variant),)+
_ => Err(()),
}
}
} }
}; };
} }
method_type! { method_type! {
Get, GET, Get, GET, get,
Post, POST, Post, POST, post,
Put, PUT, Put, PUT, put,
Delete, DELETE, Delete, DELETE, delete,
Head, HEAD, Head, HEAD, head,
Connect, CONNECT, Connect, CONNECT, connect,
Options, OPTIONS, Options, OPTIONS, options,
Trace, TRACE, Trace, TRACE, trace,
Patch, PATCH, Patch, PATCH, patch,
} }
impl ToTokens for MethodType { impl ToTokens for MethodType {
@ -90,6 +85,18 @@ impl Args {
let mut wrappers = Vec::new(); let mut wrappers = Vec::new();
let mut methods = HashSet::new(); let mut methods = HashSet::new();
if args.is_empty() {
return Err(syn::Error::new(
Span::call_site(),
format!(
r#"invalid service definition, expected #[{}("<path>")]"#,
method
.map_or("route", |it| it.as_str())
.to_ascii_lowercase()
),
));
}
let is_route_macro = method.is_none(); let is_route_macro = method.is_none();
if let Some(method) = method { if let Some(method) = method {
methods.insert(method); methods.insert(method);
@ -148,7 +155,7 @@ impl Args {
if !methods.insert(method) { if !methods.insert(method) {
return Err(syn::Error::new_spanned( return Err(syn::Error::new_spanned(
&nv.lit, &nv.lit,
&format!( format!(
"HTTP method defined more than once: `{}`", "HTTP method defined more than once: `{}`",
lit.value() lit.value()
), ),
@ -183,55 +190,27 @@ impl Args {
} }
pub struct Route { pub struct Route {
/// Name of the handler function being annotated.
name: syn::Ident, name: syn::Ident,
args: Args,
/// Args passed to routing macro.
///
/// When using `#[routes]`, this will contain args for each specific routing macro.
args: Vec<Args>,
/// AST of the handler function being annotated.
ast: syn::ItemFn, ast: syn::ItemFn,
resource_type: ResourceType,
/// The doc comment attributes to copy to generated struct, if any. /// The doc comment attributes to copy to generated struct, if any.
doc_attributes: Vec<syn::Attribute>, doc_attributes: Vec<syn::Attribute>,
} }
fn guess_resource_type(typ: &syn::Type) -> ResourceType {
let mut guess = ResourceType::Sync;
if let syn::Type::ImplTrait(typ) = typ {
for bound in typ.bounds.iter() {
if let syn::TypeParamBound::Trait(bound) = bound {
for bound in bound.path.segments.iter() {
if bound.ident == "Future" {
guess = ResourceType::Async;
break;
} else if bound.ident == "Responder" {
guess = ResourceType::Sync;
break;
}
}
}
}
}
guess
}
impl Route { impl Route {
pub fn new( pub fn new(
args: AttributeArgs, args: AttributeArgs,
ast: syn::ItemFn, ast: syn::ItemFn,
method: Option<MethodType>, method: Option<MethodType>,
) -> syn::Result<Self> { ) -> syn::Result<Self> {
if args.is_empty() {
return Err(syn::Error::new(
Span::call_site(),
format!(
r#"invalid service definition, expected #[{}("<some path>")]"#,
method
.map_or("route", |it| it.as_str())
.to_ascii_lowercase()
),
));
}
let name = ast.sig.ident.clone(); let name = ast.sig.ident.clone();
// Try and pull out the doc comments so that we can reapply them to the generated struct. // Try and pull out the doc comments so that we can reapply them to the generated struct.
@ -244,6 +223,7 @@ impl Route {
.collect(); .collect();
let args = Args::new(args, method)?; let args = Args::new(args, method)?;
if args.methods.is_empty() { if args.methods.is_empty() {
return Err(syn::Error::new( return Err(syn::Error::new(
Span::call_site(), Span::call_site(),
@ -251,25 +231,44 @@ impl Route {
)); ));
} }
let resource_type = if ast.sig.asyncness.is_some() { if matches!(ast.sig.output, syn::ReturnType::Default) {
ResourceType::Async return Err(syn::Error::new_spanned(
} else { ast,
match ast.sig.output { "Function has no return type. Cannot be used as handler",
syn::ReturnType::Default => { ));
return Err(syn::Error::new_spanned( }
ast,
"Function has no return type. Cannot be used as handler", Ok(Self {
)); name,
} args: vec![args],
syn::ReturnType::Type(_, ref typ) => guess_resource_type(typ.as_ref()), ast,
} doc_attributes,
}; })
}
fn multiple(args: Vec<Args>, ast: syn::ItemFn) -> syn::Result<Self> {
let name = ast.sig.ident.clone();
// Try and pull out the doc comments so that we can reapply them to the generated struct.
// Note that multi line doc comments are converted to multiple doc attributes.
let doc_attributes = ast
.attrs
.iter()
.filter(|attr| attr.path.is_ident("doc"))
.cloned()
.collect();
if matches!(ast.sig.output, syn::ReturnType::Default) {
return Err(syn::Error::new_spanned(
ast,
"Function has no return type. Cannot be used as handler",
));
}
Ok(Self { Ok(Self {
name, name,
args, args,
ast, ast,
resource_type,
doc_attributes, doc_attributes,
}) })
} }
@@ -280,38 +279,57 @@ impl ToTokens for Route {
        let Self {
            name,
            ast,
-           args:
-               Args {
-                   path,
-                   resource_name,
-                   guards,
-                   wrappers,
-                   methods,
-               },
-           resource_type,
+           args,
            doc_attributes,
        } = self;

-       let resource_name = resource_name
-           .as_ref()
-           .map_or_else(|| name.to_string(), LitStr::value);
-
-       let method_guards = {
-           let mut others = methods.iter();
-
-           // unwrapping since length is checked to be at least one
-           let first = others.next().unwrap();
-
-           if methods.len() > 1 {
-               quote! {
-                   .guard(
-                       ::actix_web::guard::Any(::actix_web::guard::#first())
-                       #(.or(::actix_web::guard::#others()))*
-                   )
-               }
-           } else {
-               quote! {
-                   .guard(::actix_web::guard::#first())
-               }
-           }
-       };
+       let registrations: TokenStream2 = args
+           .iter()
+           .map(|args| {
+               let Args {
+                   path,
+                   resource_name,
+                   guards,
+                   wrappers,
+                   methods,
+               } = args;
+
+               let resource_name = resource_name
+                   .as_ref()
+                   .map_or_else(|| name.to_string(), LitStr::value);
+
+               let method_guards = {
+                   let mut others = methods.iter();
+
+                   // unwrapping since length is checked to be at least one
+                   let first = others.next().unwrap();
+
+                   if methods.len() > 1 {
+                       quote! {
+                           .guard(
+                               ::actix_web::guard::Any(::actix_web::guard::#first())
+                               #(.or(::actix_web::guard::#others()))*
+                           )
+                       }
+                   } else {
+                       quote! {
+                           .guard(::actix_web::guard::#first())
+                       }
+                   }
+               };
+
+               quote! {
+                   let __resource = ::actix_web::Resource::new(#path)
+                       .name(#resource_name)
+                       #method_guards
+                       #(.guard(::actix_web::guard::fn_guard(#guards)))*
+                       #(.wrap(#wrappers))*
+                       .to(#name);
+
+                   ::actix_web::dev::HttpServiceFactory::register(__resource, __config);
+               }
+           })
+           .collect();

        let stream = quote! {
            #(#doc_attributes)*
@@ -321,14 +339,7 @@ impl ToTokens for Route {
            impl ::actix_web::dev::HttpServiceFactory for #name {
                fn register(self, __config: &mut actix_web::dev::AppService) {
                    #ast

-                   let __resource = ::actix_web::Resource::new(#path)
-                       .name(#resource_name)
-                       #method_guards
-                       #(.guard(::actix_web::guard::fn_guard(#guards)))*
-                       #(.wrap(#wrappers))*
-                       .#resource_type(#name);
-
-                   ::actix_web::dev::HttpServiceFactory::register(__resource, __config)
+                   #registrations
                }
            }
        };
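To make the new code generation concrete, here is a rough sketch of what a two-method `#[routes]` handler expands to: one `#registrations` block per parsed `Args`, each building and registering its own `Resource`. The expansion below is illustrative only; struct attributes and exact naming may differ from the real macro output.

// Written by the user:
//
//     #[routes]
//     #[get("/routes/test")]
//     #[post("/routes/test")]
//     async fn routes_test() -> impl Responder { HttpResponse::Ok() }
//
// Generated, roughly:
pub struct routes_test;

impl ::actix_web::dev::HttpServiceFactory for routes_test {
    fn register(self, __config: &mut actix_web::dev::AppService) {
        async fn routes_test() -> impl ::actix_web::Responder {
            ::actix_web::HttpResponse::Ok()
        }

        // First `Args` (from `#[get("/routes/test")]`):
        let __resource = ::actix_web::Resource::new("/routes/test")
            .name("routes_test")
            .guard(::actix_web::guard::Get())
            .to(routes_test);
        ::actix_web::dev::HttpServiceFactory::register(__resource, __config);

        // Second `Args` (from `#[post("/routes/test")]`):
        let __resource = ::actix_web::Resource::new("/routes/test")
            .name("routes_test")
            .guard(::actix_web::guard::Post())
            .to(routes_test);
        ::actix_web::dev::HttpServiceFactory::register(__resource, __config);
    }
}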
@@ -357,6 +368,57 @@ pub(crate) fn with_method(
    }
}
pub(crate) fn with_methods(input: TokenStream) -> TokenStream {
let mut ast = match syn::parse::<syn::ItemFn>(input.clone()) {
Ok(ast) => ast,
// on parse error, make IDEs happy; see fn docs
Err(err) => return input_and_compile_error(input, err),
};
let (methods, others) = ast
.attrs
.into_iter()
.map(|attr| match MethodType::from_path(&attr.path) {
Ok(method) => Ok((method, attr)),
Err(_) => Err(attr),
})
.partition::<Vec<_>, _>(Result::is_ok);
ast.attrs = others.into_iter().map(Result::unwrap_err).collect();
let methods =
match methods
.into_iter()
.map(Result::unwrap)
.map(|(method, attr)| {
attr.parse_meta().and_then(|args| {
if let Meta::List(args) = args {
Args::new(args.nested.into_iter().collect(), Some(method))
} else {
Err(syn::Error::new_spanned(attr, "Invalid input for macro"))
}
})
})
.collect::<Result<Vec<_>, _>>()
{
Ok(methods) if methods.is_empty() => return input_and_compile_error(
input,
syn::Error::new(
Span::call_site(),
"The #[routes] macro requires at least one `#[<method>(..)]` attribute.",
),
),
Ok(methods) => methods,
Err(err) => return input_and_compile_error(input, err),
};
match Route::multiple(methods, ast) {
Ok(route) => route.into_token_stream().into(),
// on macro related error, make IDEs happy; see fn docs
Err(err) => input_and_compile_error(input, err),
}
}
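In short, `with_methods` splits the function's attributes into recognized method attributes, each parsed into an `Args` via `MethodType::from_path` and `Args::new`, and everything else, which is left on the handler. A sketch of an input it would accept (hypothetical paths, usual imports assumed):

use actix_web::{HttpResponse, Responder};
use actix_web_codegen::{get, post, routes};

/// Kept on the function: doc comments are not method attributes.
#[routes]
#[get("/item")] // consumed: parsed into the first `Args`
#[post("/item")] // consumed: parsed into the second `Args`
async fn item() -> impl Responder {
    HttpResponse::Ok()
}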
/// Converts the error to a token stream and appends it to the original input.
///
/// Returning the original input in addition to the error is good for IDEs which can gracefully


@@ -8,9 +8,11 @@ use actix_web::{
        header::{HeaderName, HeaderValue},
        StatusCode,
    },
-   web, App, Error, HttpResponse, Responder,
+   web, App, Error, HttpRequest, HttpResponse, Responder,
};
-use actix_web_codegen::{connect, delete, get, head, options, patch, post, put, route, trace};
+use actix_web_codegen::{
+    connect, delete, get, head, options, patch, post, put, route, routes, trace,
+};
use futures_core::future::LocalBoxFuture;
// Make sure that we can name function as 'config' // Make sure that we can name function as 'config'
@@ -89,8 +91,41 @@ async fn route_test() -> impl Responder {
    HttpResponse::Ok()
}
#[routes]
#[get("/routes/test")]
#[get("/routes/test2")]
#[post("/routes/test")]
async fn routes_test() -> impl Responder {
HttpResponse::Ok()
}
// routes overlap with the more specific route first, therefore accessible
#[routes]
#[get("/routes/overlap/test")]
#[get("/routes/overlap/{foo}")]
async fn routes_overlapping_test(req: HttpRequest) -> impl Responder {
// foo is only populated when route is not /routes/overlap/test
match req.match_info().get("foo") {
None => assert!(req.uri() == "/routes/overlap/test"),
Some(_) => assert!(req.uri() != "/routes/overlap/test"),
}
HttpResponse::Ok()
}
// routes overlap with the more specific route last, therefore inaccessible
#[routes]
#[get("/routes/overlap2/{foo}")]
#[get("/routes/overlap2/test")]
async fn routes_overlapping_inaccessible_test(req: HttpRequest) -> impl Responder {
// foo is always populated even when path is /routes/overlap2/test
assert!(req.match_info().get("foo").is_some());
HttpResponse::Ok()
}
#[get("/custom_resource_name", name = "custom")] #[get("/custom_resource_name", name = "custom")]
async fn custom_resource_name_test<'a>(req: actix_web::HttpRequest) -> impl Responder { async fn custom_resource_name_test<'a>(req: HttpRequest) -> impl Responder {
assert!(req.url_for_static("custom").is_ok()); assert!(req.url_for_static("custom").is_ok());
assert!(req.url_for_static("custom_resource_name_test").is_err()); assert!(req.url_for_static("custom_resource_name_test").is_err());
HttpResponse::Ok() HttpResponse::Ok()
@@ -201,6 +236,9 @@ async fn test_body() {
            .service(patch_test)
            .service(test_handler)
            .service(route_test)
+           .service(routes_overlapping_test)
+           .service(routes_overlapping_inaccessible_test)
+           .service(routes_test)
            .service(custom_resource_name_test)
            .service(guard_test)
    });
@@ -258,6 +296,38 @@ async fn test_body() {
    let response = request.send().await.unwrap();
    assert!(!response.status().is_success());
let request = srv.request(http::Method::GET, srv.url("/routes/test"));
let response = request.send().await.unwrap();
assert!(response.status().is_success());
let request = srv.request(http::Method::GET, srv.url("/routes/test2"));
let response = request.send().await.unwrap();
assert!(response.status().is_success());
let request = srv.request(http::Method::POST, srv.url("/routes/test"));
let response = request.send().await.unwrap();
assert!(response.status().is_success());
let request = srv.request(http::Method::GET, srv.url("/routes/not-set"));
let response = request.send().await.unwrap();
assert!(response.status().is_client_error());
let request = srv.request(http::Method::GET, srv.url("/routes/overlap/test"));
let response = request.send().await.unwrap();
assert!(response.status().is_success());
let request = srv.request(http::Method::GET, srv.url("/routes/overlap/bar"));
let response = request.send().await.unwrap();
assert!(response.status().is_success());
let request = srv.request(http::Method::GET, srv.url("/routes/overlap2/test"));
let response = request.send().await.unwrap();
assert!(response.status().is_success());
let request = srv.request(http::Method::GET, srv.url("/routes/overlap2/bar"));
let response = request.send().await.unwrap();
assert!(response.status().is_success());
let request = srv.request(http::Method::GET, srv.url("/custom_resource_name")); let request = srv.request(http::Method::GET, srv.url("/custom_resource_name"));
let response = request.send().await.unwrap(); let response = request.send().await.unwrap();
assert!(response.status().is_success()); assert!(response.status().is_success());


@@ -1,4 +1,4 @@
-#[rustversion::stable(1.56)] // MSRV
+#[rustversion::stable(1.59)] // MSRV
#[test]
fn compile_macros() {
    let t = trybuild::TestCases::new();
@@ -12,6 +12,10 @@ fn compile_macros() {
    t.compile_fail("tests/trybuild/route-unexpected-method-fail.rs");
    t.compile_fail("tests/trybuild/route-malformed-path-fail.rs");

+   t.pass("tests/trybuild/routes-ok.rs");
+   t.compile_fail("tests/trybuild/routes-missing-method-fail.rs");
+   t.compile_fail("tests/trybuild/routes-missing-args-fail.rs");
+
    t.pass("tests/trybuild/docstring-ok.rs");
    t.pass("tests/trybuild/test-runtime.rs");


@@ -1,11 +1,19 @@
error: HTTP method defined more than once: `GET`
-  --> $DIR/route-duplicate-method-fail.rs:3:35
+  --> tests/trybuild/route-duplicate-method-fail.rs:3:35
  |
3 | #[route("/", method="GET", method="GET")]
  |                                   ^^^^^

-error[E0277]: the trait bound `fn() -> impl std::future::Future {index}: HttpServiceFactory` is not satisfied
-  --> $DIR/route-duplicate-method-fail.rs:12:55
+error[E0277]: the trait bound `fn() -> impl std::future::Future<Output = String> {index}: HttpServiceFactory` is not satisfied
+  --> tests/trybuild/route-duplicate-method-fail.rs:12:55
  |
12 | let srv = actix_test::start(|| App::new().service(index));
-  | ^^^^^ the trait `HttpServiceFactory` is not implemented for `fn() -> impl std::future::Future {index}`
+  | ------- ^^^^^ the trait `HttpServiceFactory` is not implemented for `fn() -> impl std::future::Future<Output = String> {index}`
+  | |
+  | required by a bound introduced by this call
+  |
+note: required by a bound in `App::<T>::service`
+  --> $WORKSPACE/actix-web/src/app.rs
+  |
+  | F: HttpServiceFactory + 'static,
+  | ^^^^^^^^^^^^^^^^^^ required by this bound in `App::<T>::service`


@@ -6,8 +6,16 @@ error: The #[route(..)] macro requires at least one `method` attribute
  |
  = note: this error originates in the attribute macro `route` (in Nightly builds, run with -Z macro-backtrace for more info)

-error[E0277]: the trait bound `fn() -> impl std::future::Future {index}: HttpServiceFactory` is not satisfied
+error[E0277]: the trait bound `fn() -> impl std::future::Future<Output = String> {index}: HttpServiceFactory` is not satisfied
  --> tests/trybuild/route-missing-method-fail.rs:12:55
  |
12 | let srv = actix_test::start(|| App::new().service(index));
-  | ^^^^^ the trait `HttpServiceFactory` is not implemented for `fn() -> impl std::future::Future {index}`
+  | ------- ^^^^^ the trait `HttpServiceFactory` is not implemented for `fn() -> impl std::future::Future<Output = String> {index}`
+  | |
+  | required by a bound introduced by this call
+  |
+note: required by a bound in `App::<T>::service`
+  --> $WORKSPACE/actix-web/src/app.rs
+  |
+  | F: HttpServiceFactory + 'static,
+  | ^^^^^^^^^^^^^^^^^^ required by this bound in `App::<T>::service`


@@ -1,11 +1,19 @@
error: Unexpected HTTP method: `UNEXPECTED`
-  --> $DIR/route-unexpected-method-fail.rs:3:21
+  --> tests/trybuild/route-unexpected-method-fail.rs:3:21
  |
3 | #[route("/", method="UNEXPECTED")]
  |                     ^^^^^^^^^^^^

-error[E0277]: the trait bound `fn() -> impl std::future::Future {index}: HttpServiceFactory` is not satisfied
-  --> $DIR/route-unexpected-method-fail.rs:12:55
+error[E0277]: the trait bound `fn() -> impl std::future::Future<Output = String> {index}: HttpServiceFactory` is not satisfied
+  --> tests/trybuild/route-unexpected-method-fail.rs:12:55
  |
12 | let srv = actix_test::start(|| App::new().service(index));
-  | ^^^^^ the trait `HttpServiceFactory` is not implemented for `fn() -> impl std::future::Future {index}`
+  | ------- ^^^^^ the trait `HttpServiceFactory` is not implemented for `fn() -> impl std::future::Future<Output = String> {index}`
+  | |
+  | required by a bound introduced by this call
+  |
+note: required by a bound in `App::<T>::service`
+  --> $WORKSPACE/actix-web/src/app.rs
+  |
+  | F: HttpServiceFactory + 'static,
+  | ^^^^^^^^^^^^^^^^^^ required by this bound in `App::<T>::service`

Some files were not shown because too many files have changed in this diff.