Mirror of https://github.com/fafhrd91/actix-web, synced 2025-07-03 09:36:36 +02:00.

Compare commits: http-v3.1. ... http-v3.3. (141 commits)
Commits (SHA1):

- e0939a01fc
- 20c7c07dc0
- d7c6774ad5
- 67efa4a4db
- d77bcb0b7c
- c4db9a1ae2
- 740d0c0c9d
- f27584046c
- 129b78f9c7
- ad27150c5f
- 8d5d6a2598
- e97329eb2a
- fbfff3e751
- fdfb3d45db
- 4e05629368
- e35ec28cd2
- 35006e9cae
- 115701eb86
- e2fed91efd
- d4b833ccf0
- 358c1cf85b
- 42193bee29
- dc08ea044b
- 85d88ffada
- bf19a0e761
- bf1f169be2
- 359d5d5c80
- 65c0545a7a
- b933ed4456
- 4bff1d0abe
- fa106da555
- c15016dafb
- 74688843ba
- 845156da85
- 98752c053c
- df6fde883c
- 8d4cb8c69a
- dd9ac4d9b8
- 72c80f9107
- b00fe72cf6
- 2f0b8a264a
- b9f0faafde
- 6627109984
- b9f54c8796
- cfd40b4f15
- 08c2cdf641
- fbd0e5dd0a
- 7b936bc443
- d2364c80c4
- 77459ec415
- 6f0a6bd1bb
- 06c3513bc0
- 29bd6a1dd5
- 17f7cd2aae
- ede645ee4e
- 6d48593a60
- 3c69d078b2
- e7c34f2e45
- d708a4de6d
- d97bd7ec17
- fcd06c9896
- 1065043528
- 45b77c6819
- a2e2c30d59
- 83cd061c86
- 068909f1b3
- f8cb71e789
- 73b94e902d
- ad7e67f940
- 1519ae7772
- cc7145d41d
- 172c4c7a0a
- fd63305859
- ef64d6a27c
- 4d3689db5e
- 894effb856
- 07a7290432
- bd5c0af0a6
- c73fba16ce
- 909461087c
- 40f7ab38d2
- a9e44bcf07
- 7767cf3071
- b59a96d9d7
- 037740bf62
- 386258c285
- 99bf774e94
- 35b0fd1a85
- 0b5b4dcbf3
- c993055fc8
- 679f61cf37
- 056de320f0
- f220719fae
- c9f91796df
- ea764b1d57
- 19aa14a9d6
- 10746fb2fb
- 4bbe60b609
- 8ff489aa90
- e0a88cea8d
- d78ff283af
- ce6d520215
- 3e25742a41
- 20f4cfe6b5
- 6408291ab0
- 8d260e599f
- 14bcf72ec1
- 6485434a33
- 16c7c16463
- 9b0fdca6e9
- 8759d79b03
- c0d5d7bdb5
- 40eab1f091
- 75517cce82
- 9b51624b27
- 8e2ae8cd40
- 9a2f8450e0
- 23ef51609e
- f7d629a61a
- e0845d9ad9
- 2f79daec16
- f3f41a0cc7
- 987067698b
- b62f1b4ef7
- df5257c373
- 226ea696ce
- e524fc86ea
- 7e990e423f
- 8f9a12ed5d
- c6eba2da9b
- 06c7945801
- 0dba6310c6
- f7d7d92984
- 3d6ea7fe9b
- 8dbf7da89f
- de92b3be2e
- 5d0e8138ee
- 6b7196225e
- 265fa0d050
- 062127a210
- 3926416580
10
.github/ISSUE_TEMPLATE/bug_report.md
vendored
10
.github/ISSUE_TEMPLATE/bug_report.md
vendored
@ -3,34 +3,40 @@ name: Bug Report
|
||||
about: Create a bug report.
|
||||
---
|
||||
|
||||
Your issue may already be reported!
|
||||
Please search on the [Actix Web issue tracker](https://github.com/actix/actix-web/issues) before creating one.
|
||||
Your issue may already be reported! Please search on the [Actix Web issue tracker](https://github.com/actix/actix-web/issues) before creating one.
|
||||
|
||||
## Expected Behavior
|
||||
|
||||
<!--- If you're describing a bug, tell us what should happen -->
|
||||
<!--- If you're suggesting a change/improvement, tell us how it should work -->
|
||||
|
||||
## Current Behavior
|
||||
|
||||
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
|
||||
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
|
||||
|
||||
## Possible Solution
|
||||
|
||||
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
|
||||
<!--- or ideas how to implement the addition or change -->
|
||||
|
||||
## Steps to Reproduce (for bugs)
|
||||
|
||||
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
|
||||
<!--- reproduce this bug. Include code to reproduce, if relevant -->
|
||||
|
||||
1.
|
||||
2.
|
||||
3.
|
||||
4.
|
||||
|
||||
## Context
|
||||
|
||||
<!--- How has this issue affected you? What are you trying to accomplish? -->
|
||||
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
|
||||
|
||||
## Your Environment
|
||||
|
||||
<!--- Include as many relevant details about the environment you experienced the bug in -->
|
||||
|
||||
- Rust Version (I.e, output of `rustc -V`):
|
||||
|
7
.github/PULL_REQUEST_TEMPLATE.md
vendored
7
.github/PULL_REQUEST_TEMPLATE.md
vendored
@ -2,12 +2,14 @@
|
||||
<!-- Please fill out the following to get your PR reviewed quicker. -->
|
||||
|
||||
## PR Type
|
||||
|
||||
<!-- What kind of change does this PR make? -->
|
||||
<!-- Bug Fix / Feature / Refactor / Code Style / Other -->
|
||||
|
||||
PR_TYPE
|
||||
|
||||
|
||||
## PR Checklist
|
||||
|
||||
<!-- Check your PR fulfills the following items. -->
|
||||
<!-- For draft PRs check the boxes as you complete them. -->
|
||||
|
||||
@ -17,11 +19,10 @@ PR_TYPE
|
||||
- [ ] Format code with the latest stable rustfmt.
|
||||
- [ ] (Team) Label with affected crates and semver status.
|
||||
|
||||
|
||||
## Overview
|
||||
|
||||
<!-- Describe the current and new behavior. -->
|
||||
<!-- Emphasize any breaking changes. -->
|
||||
|
||||
|
||||
<!-- If this PR fixes or closes an issue, reference it here. -->
|
||||
<!-- Closes #000 -->
|
||||
|
`.github/workflows/bench.yml` (3 changed lines)

```diff
@@ -5,6 +5,9 @@ on:
     branches:
       - master
 
+permissions:
+  contents: read # to fetch code (actions/checkout)
+
 jobs:
   check_benchmark:
     runs-on: ubuntu-latest
```
76
.github/workflows/ci-post-merge.yml
vendored
76
.github/workflows/ci-post-merge.yml
vendored
@ -4,6 +4,9 @@ on:
|
||||
push:
|
||||
branches: [master]
|
||||
|
||||
permissions:
|
||||
contents: read # to fetch code (actions/checkout)
|
||||
|
||||
jobs:
|
||||
build_and_test_nightly:
|
||||
strategy:
|
||||
@ -23,6 +26,7 @@ jobs:
|
||||
CI: 1
|
||||
CARGO_INCREMENTAL: 0
|
||||
VCPKGRS_DYNAMIC: 1
|
||||
CARGO_UNSTABLE_SPARSE_REGISTRY: true
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v2
|
||||
@ -44,18 +48,15 @@ jobs:
|
||||
profile: minimal
|
||||
override: true
|
||||
|
||||
- name: Install cargo-hack
|
||||
uses: taiki-e/install-action@cargo-hack
|
||||
|
||||
- name: Generate Cargo.lock
|
||||
uses: actions-rs/cargo@v1
|
||||
with: { command: generate-lockfile }
|
||||
- name: Cache Dependencies
|
||||
uses: Swatinem/rust-cache@v1.2.0
|
||||
|
||||
- name: Install cargo-hack
|
||||
uses: actions-rs/cargo@v1
|
||||
with:
|
||||
command: install
|
||||
args: cargo-hack
|
||||
|
||||
- name: check minimal
|
||||
uses: actions-rs/cargo@v1
|
||||
with: { command: ci-check-min }
|
||||
@ -80,69 +81,56 @@ jobs:
|
||||
|
||||
- name: Clear the cargo caches
|
||||
run: |
|
||||
cargo install cargo-cache --version 0.6.3 --no-default-features --features ci-autoclean
|
||||
cargo install cargo-cache --version 0.8.2 --no-default-features --features ci-autoclean
|
||||
cargo-cache
|
||||
|
||||
ci_feature_powerset_check:
|
||||
name: Verify Feature Combinations
|
||||
runs-on: ubuntu-latest
|
||||
|
||||
env:
|
||||
CI: 1
|
||||
CARGO_INCREMENTAL: 0
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v2
|
||||
|
||||
- name: Install stable
|
||||
uses: actions-rs/toolchain@v1
|
||||
with:
|
||||
toolchain: stable-x86_64-unknown-linux-gnu
|
||||
profile: minimal
|
||||
override: true
|
||||
- uses: dtolnay/rust-toolchain@stable
|
||||
|
||||
- name: Install cargo-hack
|
||||
uses: taiki-e/install-action@cargo-hack
|
||||
|
||||
- name: Generate Cargo.lock
|
||||
uses: actions-rs/cargo@v1
|
||||
with: { command: generate-lockfile }
|
||||
run: cargo generate-lockfile
|
||||
- name: Cache Dependencies
|
||||
uses: Swatinem/rust-cache@v1.2.0
|
||||
|
||||
- name: Install cargo-hack
|
||||
uses: actions-rs/cargo@v1
|
||||
with:
|
||||
command: install
|
||||
args: cargo-hack
|
||||
- name: check feature combinations
|
||||
run: cargo ci-check-all-feature-powerset
|
||||
|
||||
- name: check feature combinations
|
||||
uses: actions-rs/cargo@v1
|
||||
with: { command: ci-check-all-feature-powerset }
|
||||
|
||||
- name: check feature combinations
|
||||
uses: actions-rs/cargo@v1
|
||||
with: { command: ci-check-all-feature-powerset-linux }
|
||||
run: cargo ci-check-all-feature-powerset-linux
|
||||
|
||||
nextest:
|
||||
name: nextest
|
||||
runs-on: ubuntu-latest
|
||||
|
||||
env:
|
||||
CI: 1
|
||||
CARGO_INCREMENTAL: 0
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v2
|
||||
|
||||
- name: Install Rust
|
||||
uses: actions-rs/toolchain@v1
|
||||
with:
|
||||
toolchain: stable
|
||||
profile: minimal
|
||||
override: true
|
||||
- uses: dtolnay/rust-toolchain@stable
|
||||
|
||||
- name: Install nextest
|
||||
uses: taiki-e/install-action@nextest
|
||||
|
||||
- name: Generate Cargo.lock
|
||||
uses: actions-rs/cargo@v1
|
||||
with: { command: generate-lockfile }
|
||||
run: cargo generate-lockfile
|
||||
- name: Cache Dependencies
|
||||
uses: Swatinem/rust-cache@v1.3.0
|
||||
|
||||
- name: Install cargo-nextest
|
||||
uses: actions-rs/cargo@v1
|
||||
with:
|
||||
command: install
|
||||
args: cargo-nextest
|
||||
|
||||
- name: Test with cargo-nextest
|
||||
uses: actions-rs/cargo@v1
|
||||
with:
|
||||
command: nextest
|
||||
args: run
|
||||
run: cargo nextest run
|
||||
|
49
.github/workflows/ci.yml
vendored
49
.github/workflows/ci.yml
vendored
@ -6,6 +6,9 @@ on:
|
||||
push:
|
||||
branches: [master]
|
||||
|
||||
permissions:
|
||||
contents: read # to fetch code (actions/checkout)
|
||||
|
||||
jobs:
|
||||
build_and_test:
|
||||
strategy:
|
||||
@ -16,7 +19,7 @@ jobs:
|
||||
- { name: macOS, os: macos-latest, triple: x86_64-apple-darwin }
|
||||
- { name: Windows, os: windows-2022, triple: x86_64-pc-windows-msvc }
|
||||
version:
|
||||
- 1.56.0 # MSRV
|
||||
- 1.59.0 # MSRV
|
||||
- stable
|
||||
|
||||
name: ${{ matrix.target.name }} / ${{ matrix.version }}
|
||||
@ -47,17 +50,26 @@ jobs:
|
||||
profile: minimal
|
||||
override: true
|
||||
|
||||
- name: Install cargo-hack
|
||||
uses: taiki-e/install-action@cargo-hack
|
||||
|
||||
- name: workaround MSRV issues
|
||||
if: matrix.version != 'stable'
|
||||
run: |
|
||||
cargo install cargo-edit --version=0.8.0
|
||||
cargo add const-str@0.3 --dev -p=actix-web
|
||||
cargo add const-str@0.3 --dev -p=awc
|
||||
|
||||
- name: Generate Cargo.lock
|
||||
uses: actions-rs/cargo@v1
|
||||
with: { command: generate-lockfile }
|
||||
- name: Cache Dependencies
|
||||
uses: Swatinem/rust-cache@v1.2.0
|
||||
|
||||
- name: Install cargo-hack
|
||||
uses: actions-rs/cargo@v1
|
||||
with:
|
||||
command: install
|
||||
args: cargo-hack
|
||||
- name: workaround MSRV issues
|
||||
if: matrix.version != 'stable'
|
||||
run: |
|
||||
cargo update -p=zstd-sys --precise=2.0.1+zstd.1.5.2
|
||||
|
||||
- name: check minimal
|
||||
uses: actions-rs/cargo@v1
|
||||
@ -83,7 +95,7 @@ jobs:
|
||||
|
||||
- name: Clear the cargo caches
|
||||
run: |
|
||||
cargo install cargo-cache --version 0.6.3 --no-default-features --features ci-autoclean
|
||||
cargo install cargo-cache --version 0.8.2 --no-default-features --features ci-autoclean
|
||||
cargo-cache
|
||||
|
||||
io-uring:
|
||||
@ -92,16 +104,10 @@ jobs:
|
||||
steps:
|
||||
- uses: actions/checkout@v2
|
||||
|
||||
- name: Install Rust
|
||||
uses: actions-rs/toolchain@v1
|
||||
with:
|
||||
toolchain: stable-x86_64-unknown-linux-gnu
|
||||
profile: minimal
|
||||
override: true
|
||||
- uses: dtolnay/rust-toolchain@stable
|
||||
|
||||
- name: Generate Cargo.lock
|
||||
uses: actions-rs/cargo@v1
|
||||
with: { command: generate-lockfile }
|
||||
run: cargo generate-lockfile
|
||||
- name: Cache Dependencies
|
||||
uses: Swatinem/rust-cache@v1.3.0
|
||||
|
||||
@ -119,20 +125,13 @@ jobs:
|
||||
steps:
|
||||
- uses: actions/checkout@v2
|
||||
|
||||
- name: Install Rust (nightly)
|
||||
uses: actions-rs/toolchain@v1
|
||||
with:
|
||||
toolchain: nightly-x86_64-unknown-linux-gnu
|
||||
profile: minimal
|
||||
override: true
|
||||
- uses: dtolnay/rust-toolchain@nightly
|
||||
|
||||
- name: Generate Cargo.lock
|
||||
uses: actions-rs/cargo@v1
|
||||
with: { command: generate-lockfile }
|
||||
run: cargo generate-lockfile
|
||||
- name: Cache Dependencies
|
||||
uses: Swatinem/rust-cache@v1.3.0
|
||||
|
||||
- name: doc tests
|
||||
uses: actions-rs/cargo@v1
|
||||
run: cargo ci-doctest
|
||||
timeout-minutes: 60
|
||||
with: { command: ci-doctest }
|
||||
|
41
.github/workflows/clippy-fmt.yml
vendored
41
.github/workflows/clippy-fmt.yml
vendored
@ -9,54 +9,37 @@ jobs:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v2
|
||||
|
||||
- name: Install Rust
|
||||
uses: actions-rs/toolchain@v1
|
||||
with:
|
||||
toolchain: stable
|
||||
profile: minimal
|
||||
components: rustfmt
|
||||
- name: Check with rustfmt
|
||||
uses: actions-rs/cargo@v1
|
||||
with:
|
||||
command: fmt
|
||||
args: --all -- --check
|
||||
- uses: dtolnay/rust-toolchain@nightly
|
||||
with: { components: rustfmt }
|
||||
- run: cargo fmt --all -- --check
|
||||
|
||||
clippy:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v2
|
||||
|
||||
- name: Install Rust
|
||||
uses: actions-rs/toolchain@v1
|
||||
with:
|
||||
toolchain: stable
|
||||
profile: minimal
|
||||
components: clippy
|
||||
override: true
|
||||
- uses: dtolnay/rust-toolchain@stable
|
||||
with: { components: clippy }
|
||||
|
||||
- name: Generate Cargo.lock
|
||||
uses: actions-rs/cargo@v1
|
||||
with: { command: generate-lockfile }
|
||||
run: cargo generate-lockfile
|
||||
- name: Cache Dependencies
|
||||
uses: Swatinem/rust-cache@v1.2.0
|
||||
|
||||
|
||||
- name: Check with Clippy
|
||||
uses: actions-rs/clippy-check@v1
|
||||
with:
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
||||
args: --workspace --tests --examples --all-features
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
lint-docs:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v2
|
||||
- name: Install Rust
|
||||
uses: actions-rs/toolchain@v1
|
||||
with:
|
||||
toolchain: stable
|
||||
profile: minimal
|
||||
components: rust-docs
|
||||
|
||||
- uses: dtolnay/rust-toolchain@stable
|
||||
with: { components: rust-docs }
|
||||
|
||||
- name: Check for broken intra-doc links
|
||||
uses: actions-rs/cargo@v1
|
||||
env:
|
||||
|
`.github/workflows/upload-doc.yml` (25 changed lines)

```diff
@@ -4,32 +4,29 @@ on:
   push:
     branches: [master]
 
+permissions: {}
 jobs:
   build:
+    permissions:
+      contents: write # to push changes in repo (jamesives/github-pages-deploy-action)
+
     runs-on: ubuntu-latest
 
     steps:
       - uses: actions/checkout@v2
 
-      - name: Install Rust
-        uses: actions-rs/toolchain@v1
-        with:
-          toolchain: nightly-x86_64-unknown-linux-gnu
-          profile: minimal
-          override: true
+      - uses: dtolnay/rust-toolchain@nightly
 
       - name: Build Docs
-        uses: actions-rs/cargo@v1
-        with:
-          command: doc
-          args: --workspace --all-features --no-deps
+        run: cargo +nightly doc --no-deps --workspace --all-features
         env:
          RUSTDOCFLAGS: --cfg=docsrs
 
       - name: Tweak HTML
         run: echo '<meta http-equiv="refresh" content="0;url=actix_web/index.html">' > target/doc/index.html
 
       - name: Deploy to GitHub Pages
-        uses: JamesIves/github-pages-deploy-action@3.7.1
+        uses: JamesIves/github-pages-deploy-action@v4.4.1
         with:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          BRANCH: gh-pages
-          FOLDER: target/doc
+          folder: target/doc
+          single-commit: true
```

Deleted file (3 lines):

```diff
@@ -1,3 +0,0 @@
-{
-  "proseWrap": "never"
-}
```

`.prettierrc.yaml` (new file, 1 line)

```diff
@@ -0,0 +1 @@
+proseWrap: never
```
```diff
@@ -5,6 +5,7 @@ members = [
   "actix-http-test",
   "actix-http",
   "actix-multipart",
+  "actix-multipart-derive",
   "actix-router",
   "actix-test",
   "actix-web-actors",
@@ -27,6 +28,7 @@ actix-files = { path = "actix-files" }
 actix-http = { path = "actix-http" }
 actix-http-test = { path = "actix-http-test" }
 actix-multipart = { path = "actix-multipart" }
+actix-multipart-derive = { path = "actix-multipart-derive" }
 actix-router = { path = "actix-router" }
 actix-test = { path = "actix-test" }
 actix-web = { path = "actix-web" }
```
@ -1,9 +1,24 @@
|
||||
# Changes
|
||||
|
||||
## Unreleased - 2021-xx-xx
|
||||
## Unreleased - 2023-xx-xx
|
||||
|
||||
## 0.6.3 - 2023-01-21
|
||||
|
||||
- XHTML files now use `Content-Disposition: inline` instead of `attachment`. [#2903]
|
||||
- Minimum supported Rust version (MSRV) is now 1.59 due to transitive `time` dependency.
|
||||
- Update `tokio-uring` dependency to `0.4`.
|
||||
|
||||
[#2903]: https://github.com/actix/actix-web/pull/2903
|
||||
|
||||
## 0.6.2 - 2022-07-23
|
||||
|
||||
- Allow partial range responses for video content to start streaming sooner. [#2817]
|
||||
- Minimum supported Rust version (MSRV) is now 1.57 due to transitive `time` dependency.
|
||||
|
||||
[#2817]: https://github.com/actix/actix-web/pull/2817
|
||||
|
||||
## 0.6.1 - 2022-06-11
|
||||
|
||||
- Add `NamedFile::{modified, metadata, content_type, content_disposition, encoding}()` getters. [#2021]
|
||||
- Update `tokio-uring` dependency to `0.3`.
|
||||
- Audio files now use `Content-Disposition: inline` instead of `attachment`. [#2645]
|
||||
@ -12,46 +27,46 @@
|
||||
[#2021]: https://github.com/actix/actix-web/pull/2021
|
||||
[#2645]: https://github.com/actix/actix-web/pull/2645
|
||||
|
||||
|
||||
## 0.6.0 - 2022-02-25
|
||||
|
||||
- No significant changes since `0.6.0-beta.16`.
|
||||
|
||||
|
||||
## 0.6.0-beta.16 - 2022-01-31
|
||||
|
||||
- No significant changes since `0.6.0-beta.15`.
|
||||
|
||||
|
||||
## 0.6.0-beta.15 - 2022-01-21
|
||||
|
||||
- No significant changes since `0.6.0-beta.14`.
|
||||
|
||||
|
||||
## 0.6.0-beta.14 - 2022-01-14
|
||||
|
||||
- The `prefer_utf8` option introduced in `0.4.0` is now true by default. [#2583]
|
||||
|
||||
[#2583]: https://github.com/actix/actix-web/pull/2583
|
||||
|
||||
|
||||
## 0.6.0-beta.13 - 2022-01-04
|
||||
|
||||
- The `Files` service now rejects requests with URL paths that include `%2F` (decoded: `/`). [#2398]
|
||||
- The `Files` service now correctly decodes `%25` in the URL path to `%` for the file path. [#2398]
|
||||
- Minimum supported Rust version (MSRV) is now 1.54.
|
||||
|
||||
[#2398]: https://github.com/actix/actix-web/pull/2398
|
||||
|
||||
|
||||
## 0.6.0-beta.12 - 2021-12-29
|
||||
|
||||
- No significant changes since `0.6.0-beta.11`.
|
||||
|
||||
|
||||
## 0.6.0-beta.11 - 2021-12-27
|
||||
|
||||
- No significant changes since `0.6.0-beta.10`.
|
||||
|
||||
|
||||
## 0.6.0-beta.10 - 2021-12-11
|
||||
|
||||
- No significant changes since `0.6.0-beta.9`.
|
||||
|
||||
|
||||
## 0.6.0-beta.9 - 2021-11-22
|
||||
|
||||
- Add crate feature `experimental-io-uring`, enabling async file I/O to be utilized. This feature is only available on Linux OSes with recent kernel versions. This feature is semver-exempt. [#2408]
|
||||
- Add `NamedFile::open_async`. [#2408]
|
||||
- Fix 304 Not Modified responses to omit the Content-Length header, as per the spec. [#2453]
|
||||
@ -62,24 +77,24 @@
|
||||
[#2408]: https://github.com/actix/actix-web/pull/2408
|
||||
[#2453]: https://github.com/actix/actix-web/pull/2453
|
||||
|
||||
|
||||
## 0.6.0-beta.8 - 2021-10-20
|
||||
|
||||
- Minimum supported Rust version (MSRV) is now 1.52.
|
||||
|
||||
|
||||
## 0.6.0-beta.7 - 2021-09-09
|
||||
|
||||
- Minimum supported Rust version (MSRV) is now 1.51.
|
||||
|
||||
|
||||
## 0.6.0-beta.6 - 2021-06-26
|
||||
|
||||
- Added `Files::path_filter()`. [#2274]
|
||||
- `Files::show_files_listing()` can now be used with `Files::index_file()` to show files listing as a fallback when the index file is not found. [#2228]
|
||||
|
||||
[#2274]: https://github.com/actix/actix-web/pull/2274
|
||||
[#2228]: https://github.com/actix/actix-web/pull/2228
|
||||
|
||||
|
||||
## 0.6.0-beta.5 - 2021-06-17
|
||||
|
||||
- `NamedFile` now implements `ServiceFactory` and `HttpServiceFactory` making it much more useful in routing. For example, it can be used directly as a default service. [#2135]
|
||||
- For symbolic links, `Content-Disposition` header no longer shows the filename of the original file. [#2156]
|
||||
- `Files::redirect_to_slash_directory()` now works as expected when used with `Files::show_files_listing()`. [#2225]
|
||||
@ -90,58 +105,58 @@
|
||||
[#2225]: https://github.com/actix/actix-web/pull/2225
|
||||
[#2257]: https://github.com/actix/actix-web/pull/2257
|
||||
|
||||
|
||||
## 0.6.0-beta.4 - 2021-04-02
|
||||
|
||||
- Add support for `.guard` in `Files` to selectively filter `Files` services. [#2046]
|
||||
|
||||
[#2046]: https://github.com/actix/actix-web/pull/2046
|
||||
|
||||
|
||||
## 0.6.0-beta.3 - 2021-03-09
|
||||
|
||||
- No notable changes.
|
||||
|
||||
|
||||
## 0.6.0-beta.2 - 2021-02-10
|
||||
|
||||
- Fix If-Modified-Since and If-Unmodified-Since to not compare using sub-second timestamps. [#1887]
|
||||
- Replace `v_htmlescape` with `askama_escape`. [#1953]
|
||||
|
||||
[#1887]: https://github.com/actix/actix-web/pull/1887
|
||||
[#1953]: https://github.com/actix/actix-web/pull/1953
|
||||
|
||||
|
||||
## 0.6.0-beta.1 - 2021-01-07
|
||||
|
||||
- `HttpRange::parse` now has its own error type.
|
||||
- Update `bytes` to `1.0`. [#1813]
|
||||
|
||||
[#1813]: https://github.com/actix/actix-web/pull/1813
|
||||
|
||||
|
||||
## 0.5.0 - 2020-12-26
|
||||
|
||||
- Optionally support hidden files/directories. [#1811]
|
||||
|
||||
[#1811]: https://github.com/actix/actix-web/pull/1811
|
||||
|
||||
|
||||
## 0.4.1 - 2020-11-24
|
||||
|
||||
- Clarify order of parameters in `Files::new` and improve docs.
|
||||
|
||||
|
||||
## 0.4.0 - 2020-10-06
|
||||
|
||||
- Add `Files::prefer_utf8` option that adds UTF-8 charset on certain response types. [#1714]
|
||||
|
||||
[#1714]: https://github.com/actix/actix-web/pull/1714
|
||||
|
||||
|
||||
## 0.3.0 - 2020-09-11
|
||||
|
||||
- No significant changes from 0.3.0-beta.1.
|
||||
|
||||
|
||||
## 0.3.0-beta.1 - 2020-07-15
|
||||
|
||||
- Update `v_htmlescape` to 0.10
|
||||
- Update `actix-web` and `actix-http` dependencies to beta.1
|
||||
|
||||
|
||||
## 0.3.0-alpha.1 - 2020-05-23
|
||||
|
||||
- Update `actix-web` and `actix-http` dependencies to alpha
|
||||
- Fix some typos in the docs
|
||||
- Bump minimum supported Rust version to 1.40
|
||||
@ -149,73 +164,73 @@
|
||||
|
||||
[#1384]: https://github.com/actix/actix-web/pull/1384
|
||||
|
||||
|
||||
## 0.2.1 - 2019-12-22
|
||||
|
||||
- Use the same format for file URLs regardless of platforms
|
||||
|
||||
|
||||
## 0.2.0 - 2019-12-20
|
||||
|
||||
- Fix BodyEncoding trait import #1220
|
||||
|
||||
|
||||
## 0.2.0-alpha.1 - 2019-12-07
|
||||
|
||||
- Migrate to `std::future`
|
||||
|
||||
|
||||
## 0.1.7 - 2019-11-06
|
||||
- Add an additional `filename*` param in the `Content-Disposition` header of
|
||||
`actix_files::NamedFile` to be more compatible. (#1151)
|
||||
|
||||
- Add an additional `filename*` param in the `Content-Disposition` header of `actix_files::NamedFile` to be more compatible. (#1151)
|
||||
|
||||
## 0.1.6 - 2019-10-14
|
||||
|
||||
- Add option to redirect to a slash-ended path `Files` #1132
|
||||
|
||||
|
||||
## 0.1.5 - 2019-10-08
|
||||
|
||||
- Bump up `mime_guess` crate version to 2.0.1
|
||||
- Bump up `percent-encoding` crate version to 2.1
|
||||
- Allow user defined request guards for `Files` #1113
|
||||
|
||||
|
||||
## 0.1.4 - 2019-07-20
|
||||
|
||||
- Allow to disable `Content-Disposition` header #686
|
||||
|
||||
|
||||
## 0.1.3 - 2019-06-28
|
||||
|
||||
- Do not set `Content-Length` header, let actix-http set it #930
|
||||
|
||||
|
||||
## 0.1.2 - 2019-06-13
|
||||
|
||||
- Content-Length is 0 for NamedFile HEAD request #914
|
||||
- Fix ring dependency from actix-web default features for #741
|
||||
|
||||
|
||||
## 0.1.1 - 2019-06-01
|
||||
|
||||
- Static files are incorrectly served as both chunked and with length #812
|
||||
|
||||
|
||||
## 0.1.0 - 2019-05-25
|
||||
|
||||
- NamedFile last-modified check always fails due to nano-seconds in file modified date #820
|
||||
|
||||
|
||||
## 0.1.0-beta.4 - 2019-05-12
|
||||
|
||||
- Update actix-web to beta.4
|
||||
|
||||
|
||||
## 0.1.0-beta.1 - 2019-04-20
|
||||
|
||||
- Update actix-web to beta.1
|
||||
|
||||
|
||||
## 0.1.0-alpha.6 - 2019-04-14
|
||||
|
||||
- Update actix-web to alpha6
|
||||
|
||||
|
||||
## 0.1.0-alpha.4 - 2019-04-08
|
||||
|
||||
- Update actix-web to alpha4
|
||||
|
||||
|
||||
## 0.1.0-alpha.2 - 2019-04-02
|
||||
|
||||
- Add default handler support
|
||||
|
||||
|
||||
## 0.1.0-alpha.1 - 2019-03-28
|
||||
|
||||
- Initial impl
|
||||
|
```diff
@@ -1,9 +1,8 @@
 [package]
 name = "actix-files"
-version = "0.6.1"
+version = "0.6.3"
 authors = [
   "Nikolay Kim <fafhrd91@gmail.com>",
   "fakeshadow <24548779@qq.com>",
   "Rob Ede <robjtede@icloud.com>",
 ]
 description = "Static file serving for Actix Web"
@@ -27,25 +26,25 @@ actix-service = "2"
 actix-utils = "3"
 actix-web = { version = "4", default-features = false }
 
-askama_escape = "0.10"
 bitflags = "1"
 bytes = "1"
 derive_more = "0.99.5"
-futures-core = { version = "0.3.7", default-features = false, features = ["alloc"] }
+futures-core = { version = "0.3.17", default-features = false, features = ["alloc"] }
 http-range = "0.1.4"
 log = "0.4"
 mime = "0.3"
 mime_guess = "2.0.1"
 percent-encoding = "2.1"
 pin-project-lite = "0.2.7"
+v_htmlescape = "0.15"
 
 # experimental-io-uring
 [target.'cfg(target_os = "linux")'.dependencies]
-tokio-uring = { version = "0.3", optional = true, features = ["bytes"] }
-actix-server = { version = "2.1", optional = true } # ensure matching tokio-uring versions
+tokio-uring = { version = "0.4", optional = true, features = ["bytes"] }
+actix-server = { version = "2.2", optional = true } # ensure matching tokio-uring versions
 
 [dev-dependencies]
 actix-rt = "2.7"
-actix-test = "0.1.0-beta.13"
+actix-test = "0.1"
 actix-web = "4"
 tempfile = "3.2"
```
```diff
@@ -3,11 +3,11 @@
 > Static file serving for Actix Web
 
 [](https://crates.io/crates/actix-files)
-[](https://docs.rs/actix-files/0.6.1)
+[](https://docs.rs/actix-files/0.6.3)
 <br />
-[](https://deps.rs/crate/actix-files/0.6.1)
+[](https://deps.rs/crate/actix-files/0.6.3)
 [](https://crates.io/crates/actix-files)
 [](https://discord.gg/NWpN5mmg3x)
 
@@ -15,4 +15,4 @@
 
 - [API Documentation](https://docs.rs/actix-files)
 - [Example Project](https://github.com/actix/examples/tree/master/basics/static-files)
-- Minimum Supported Rust Version (MSRV): 1.54
+- Minimum Supported Rust Version (MSRV): 1.59
```
```diff
@@ -1,8 +1,8 @@
 use std::{fmt::Write, fs::DirEntry, io, path::Path, path::PathBuf};
 
 use actix_web::{dev::ServiceResponse, HttpRequest, HttpResponse};
-use askama_escape::{escape as escape_html_entity, Html};
 use percent_encoding::{utf8_percent_encode, CONTROLS};
+use v_htmlescape::escape as escape_html_entity;
 
 /// A directory; responds with the generated directory listing.
 #[derive(Debug)]
@@ -59,7 +59,7 @@ macro_rules! encode_file_url {
 /// ```
 macro_rules! encode_file_name {
     ($entry:ident) => {
-        escape_html_entity(&$entry.file_name().to_string_lossy(), Html)
+        escape_html_entity(&$entry.file_name().to_string_lossy())
     };
 }
 
```
|
@ -2,7 +2,7 @@ use actix_web::{http::StatusCode, ResponseError};
|
||||
use derive_more::Display;
|
||||
|
||||
/// Errors which can occur when serving static files.
|
||||
#[derive(Display, Debug, PartialEq)]
|
||||
#[derive(Debug, PartialEq, Eq, Display)]
|
||||
pub enum FilesError {
|
||||
/// Path is not a directory
|
||||
#[allow(dead_code)]
|
||||
@ -22,7 +22,7 @@ impl ResponseError for FilesError {
|
||||
}
|
||||
|
||||
#[allow(clippy::enum_variant_names)]
|
||||
#[derive(Display, Debug, PartialEq)]
|
||||
#[derive(Debug, PartialEq, Eq, Display)]
|
||||
#[non_exhaustive]
|
||||
pub enum UriSegmentError {
|
||||
/// The segment started with the wrapped invalid character.
|
||||
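The `Display` derive plus `ResponseError` pattern used for `FilesError` above generalizes to application errors. A minimal, hypothetical sketch (the `UploadError` type, its message, and the chosen status code are illustrative, not part of actix-files):

```rust
use actix_web::{http::StatusCode, ResponseError};
use derive_more::Display;

// Hypothetical application error following the same pattern as `FilesError`:
// `Display` provides the body text and `ResponseError` maps it to a status code.
#[derive(Debug, Display)]
#[display(fmt = "upload rejected: {}", reason)]
struct UploadError {
    reason: String,
}

impl ResponseError for UploadError {
    fn status_code(&self) -> StatusCode {
        StatusCode::BAD_REQUEST
    }
}

fn main() {
    let err = UploadError { reason: "file too large".to_owned() };
    assert_eq!(err.status_code(), StatusCode::BAD_REQUEST);
    assert_eq!(err.to_string(), "upload rejected: file too large");
}
```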
|
@ -142,7 +142,7 @@ impl Files {
|
||||
self
|
||||
}
|
||||
|
||||
/// Set custom directory renderer
|
||||
/// Set custom directory renderer.
|
||||
pub fn files_listing_renderer<F>(mut self, f: F) -> Self
|
||||
where
|
||||
for<'r, 's> F:
|
||||
@ -152,7 +152,7 @@ impl Files {
|
||||
self
|
||||
}
|
||||
|
||||
/// Specifies mime override callback
|
||||
/// Specifies MIME override callback.
|
||||
pub fn mime_override<F>(mut self, f: F) -> Self
|
||||
where
|
||||
F: Fn(&mime::Name<'_>) -> DispositionType + 'static,
|
||||
@ -390,3 +390,42 @@ impl ServiceFactory<ServiceRequest> for Files {
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use actix_web::{
|
||||
http::StatusCode,
|
||||
test::{self, TestRequest},
|
||||
App, HttpResponse,
|
||||
};
|
||||
|
||||
use super::*;
|
||||
|
||||
#[actix_web::test]
|
||||
async fn custom_files_listing_renderer() {
|
||||
let srv = test::init_service(
|
||||
App::new().service(
|
||||
Files::new("/", "./tests")
|
||||
.show_files_listing()
|
||||
.files_listing_renderer(|dir, req| {
|
||||
Ok(ServiceResponse::new(
|
||||
req.clone(),
|
||||
HttpResponse::Ok().body(dir.path.to_str().unwrap().to_owned()),
|
||||
))
|
||||
}),
|
||||
),
|
||||
)
|
||||
.await;
|
||||
|
||||
let req = TestRequest::with_uri("/").to_request();
|
||||
let res = test::call_service(&srv, req).await;
|
||||
|
||||
assert_eq!(res.status(), StatusCode::OK);
|
||||
let body = test::read_body(res).await;
|
||||
assert!(
|
||||
body.ends_with(b"actix-files/tests/"),
|
||||
"body {:?} does not end with `actix-files/tests/`",
|
||||
body
|
||||
);
|
||||
}
|
||||
}
|
||||
|
```diff
@@ -13,6 +13,10 @@
 
 #![deny(rust_2018_idioms, nonstandard_style)]
 #![warn(future_incompatible, missing_docs, missing_debug_implementations)]
+#![allow(clippy::uninlined_format_args)]
 #![doc(html_logo_url = "https://actix.rs/img/logo.png")]
 #![doc(html_favicon_url = "https://actix.rs/favicon.ico")]
 #![cfg_attr(docsrs, feature(doc_auto_cfg))]
 
 use actix_service::boxed::{BoxService, BoxServiceFactory};
 use actix_web::{
```
```diff
@@ -132,7 +132,7 @@ impl NamedFile {
             mime::IMAGE | mime::TEXT | mime::AUDIO | mime::VIDEO => DispositionType::Inline,
             mime::APPLICATION => match ct.subtype() {
                 mime::JAVASCRIPT | mime::JSON => DispositionType::Inline,
-                name if name == "wasm" => DispositionType::Inline,
+                name if name == "wasm" || name == "xhtml" => DispositionType::Inline,
                 _ => DispositionType::Attachment,
             },
             _ => DispositionType::Attachment,
@@ -528,11 +528,26 @@ impl NamedFile {
                 length = ranges[0].length;
                 offset = ranges[0].start;
 
-                // don't allow compression middleware to modify partial content
-                res.insert_header((
-                    header::CONTENT_ENCODING,
-                    HeaderValue::from_static("identity"),
-                ));
+                // When a Content-Encoding header is present in a 206 partial content response
+                // for video content, it prevents browser video players from starting playback
+                // before loading the whole video and also prevents seeking.
+                //
+                // See: https://github.com/actix/actix-web/issues/2815
+                //
+                // The assumption of this fix is that the video player knows to not send an
+                // Accept-Encoding header for this request and that downstream middleware will
+                // not attempt compression for requests without it.
+                //
+                // TODO: Solve question around what to do if self.encoding is set and partial
+                // range is requested. Reject request? Ignoring self.encoding seems wrong, too.
+                // In practice, it should not come up.
+                if req.headers().contains_key(&header::ACCEPT_ENCODING) {
+                    // don't allow compression middleware to modify partial content
+                    res.insert_header((
+                        header::CONTENT_ENCODING,
+                        HeaderValue::from_static("identity"),
+                    ));
+                }
 
                 res.insert_header((
                     header::CONTENT_RANGE,
```
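For context, the getters added in actix-files 0.6.1 (see the changelog above) let you observe the disposition chosen by this match. A small sketch with a made-up file path, assuming actix-files 0.6.3, where XHTML joins the inline list:

```rust
use actix_files::NamedFile;

// Sketch only: the path is made up; any `.xhtml` file on disk will do.
#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let file = NamedFile::open_async("./static/page.xhtml").await?;

    // Per the 0.6.3 changelog, XHTML is now grouped with wasm, JavaScript, and
    // JSON, so the prepared Content-Disposition is inline rather than attachment.
    println!("content type: {}", file.content_type());
    println!("disposition:  {:?}", file.content_disposition());

    Ok(())
}
```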
```diff
@@ -30,7 +30,7 @@ impl PathBufWrap {
         let mut segment_count = path.matches('/').count() + 1;
 
         // we can decode the whole path here (instead of per-segment decoding)
-        // because we will reject `%2F` in paths using `segement_count`.
+        // because we will reject `%2F` in paths using `segment_count`.
         let path = percent_encoding::percent_decode_str(path)
             .decode_utf8()
             .map_err(|_| UriSegmentError::NotValidUtf8)?;
```
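As a standalone illustration of the whole-path decoding that comment describes (using the `percent-encoding` crate; the sample path is invented):

```rust
use percent_encoding::percent_decode_str;

fn main() {
    // `%20` becomes a space and `%25` becomes a literal `%`; a `%2F` would have
    // already been rejected by the per-segment `segment_count` check.
    let decoded = percent_decode_str("files/hello%20world%25.txt")
        .decode_utf8()
        .expect("valid UTF-8 after decoding");
    assert_eq!(decoded, "files/hello world%.txt");
}
```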
```diff
@@ -23,7 +23,7 @@ impl Deref for FilesService {
     type Target = FilesServiceInner;
 
     fn deref(&self) -> &Self::Target {
-        &*self.0
+        &self.0
     }
 }
 
```
@ -1,11 +1,11 @@
|
||||
use actix_files::Files;
|
||||
use actix_files::{Files, NamedFile};
|
||||
use actix_web::{
|
||||
http::{
|
||||
header::{self, HeaderValue},
|
||||
StatusCode,
|
||||
},
|
||||
test::{self, TestRequest},
|
||||
App,
|
||||
web, App,
|
||||
};
|
||||
|
||||
#[actix_web::test]
|
||||
@ -36,3 +36,31 @@ async fn test_utf8_file_contents() {
|
||||
Some(&HeaderValue::from_static("text/plain")),
|
||||
);
|
||||
}
|
||||
|
||||
#[actix_web::test]
|
||||
async fn partial_range_response_encoding() {
|
||||
let srv = test::init_service(App::new().default_service(web::to(|| async {
|
||||
NamedFile::open_async("./tests/test.binary").await.unwrap()
|
||||
})))
|
||||
.await;
|
||||
|
||||
// range request without accept-encoding returns no content-encoding header
|
||||
let req = TestRequest::with_uri("/")
|
||||
.append_header((header::RANGE, "bytes=10-20"))
|
||||
.to_request();
|
||||
let res = test::call_service(&srv, req).await;
|
||||
assert_eq!(res.status(), StatusCode::PARTIAL_CONTENT);
|
||||
assert!(!res.headers().contains_key(header::CONTENT_ENCODING));
|
||||
|
||||
// range request with accept-encoding returns a content-encoding header
|
||||
let req = TestRequest::with_uri("/")
|
||||
.append_header((header::RANGE, "bytes=10-20"))
|
||||
.append_header((header::ACCEPT_ENCODING, "identity"))
|
||||
.to_request();
|
||||
let res = test::call_service(&srv, req).await;
|
||||
assert_eq!(res.status(), StatusCode::PARTIAL_CONTENT);
|
||||
assert_eq!(
|
||||
res.headers().get(header::CONTENT_ENCODING).unwrap(),
|
||||
"identity"
|
||||
);
|
||||
}
|
||||
|
@ -1,76 +1,97 @@
|
||||
# Changes
|
||||
|
||||
## Unreleased - 2021-xx-xx
|
||||
- Minimum supported Rust version (MSRV) is now 1.56 due to transitive `hashbrown` dependency.
|
||||
## Unreleased - 2023-xx-xx
|
||||
|
||||
## 3.1.0 - 2023-01-21
|
||||
|
||||
- Minimum supported Rust version (MSRV) is now 1.59.
|
||||
|
||||
## 3.0.0 - 2022-07-24
|
||||
|
||||
- `TestServer::stop` is now async and will wait for the server and system to shutdown. [#2442]
|
||||
- Added `TestServer::client_headers` method. [#2097]
|
||||
- Update `actix-server` dependency to `2`.
|
||||
- Update `actix-tls` dependency to `3`.
|
||||
- Update `bytes` to `1.0`. [#1813]
|
||||
- Minimum supported Rust version (MSRV) is now 1.57.
|
||||
|
||||
[#2442]: https://github.com/actix/actix-web/pull/2442
|
||||
[#2097]: https://github.com/actix/actix-web/pull/2097
|
||||
[#1813]: https://github.com/actix/actix-web/pull/1813
|
||||
|
||||
<details>
|
||||
<summary>3.0.0 Pre-Releases</summary>
|
||||
|
||||
## 3.0.0-beta.13 - 2022-02-16
|
||||
|
||||
- No significant changes since `3.0.0-beta.12`.
|
||||
|
||||
|
||||
## 3.0.0-beta.12 - 2022-01-31
|
||||
|
||||
- No significant changes since `3.0.0-beta.11`.
|
||||
|
||||
|
||||
## 3.0.0-beta.11 - 2022-01-04
|
||||
|
||||
- Minimum supported Rust version (MSRV) is now 1.54.
|
||||
|
||||
|
||||
## 3.0.0-beta.10 - 2021-12-27
|
||||
|
||||
- Update `actix-server` to `2.0.0-rc.2`. [#2550]
|
||||
|
||||
[#2550]: https://github.com/actix/actix-web/pull/2550
|
||||
|
||||
|
||||
## 3.0.0-beta.9 - 2021-12-11
|
||||
|
||||
- No significant changes since `3.0.0-beta.8`.
|
||||
|
||||
|
||||
## 3.0.0-beta.8 - 2021-11-30
|
||||
|
||||
- Update `actix-tls` to `3.0.0-rc.1`. [#2474]
|
||||
|
||||
[#2474]: https://github.com/actix/actix-web/pull/2474
|
||||
|
||||
|
||||
## 3.0.0-beta.7 - 2021-11-22
|
||||
|
||||
- Fix compatibility with experimental `io-uring` feature of `actix-rt`. [#2408]
|
||||
|
||||
[#2408]: https://github.com/actix/actix-web/pull/2408
|
||||
|
||||
|
||||
## 3.0.0-beta.6 - 2021-11-15
|
||||
|
||||
- `TestServer::stop` is now async and will wait for the server and system to shutdown. [#2442]
|
||||
- Update `actix-server` to `2.0.0-beta.9`. [#2442]
|
||||
- Minimum supported Rust version (MSRV) is now 1.52.
|
||||
|
||||
[#2442]: https://github.com/actix/actix-web/pull/2442
|
||||
|
||||
|
||||
## 3.0.0-beta.5 - 2021-09-09
|
||||
|
||||
- Minimum supported Rust version (MSRV) is now 1.51.
|
||||
|
||||
|
||||
## 3.0.0-beta.4 - 2021-04-02
|
||||
|
||||
- Added `TestServer::client_headers` method. [#2097]
|
||||
|
||||
[#2097]: https://github.com/actix/actix-web/pull/2097
|
||||
|
||||
|
||||
## 3.0.0-beta.3 - 2021-03-09
|
||||
- No notable changes.
|
||||
|
||||
- No notable changes.
|
||||
|
||||
## 3.0.0-beta.2 - 2021-02-10
|
||||
|
||||
- No notable changes.
|
||||
|
||||
|
||||
## 3.0.0-beta.1 - 2021-01-07
|
||||
|
||||
- Update `bytes` to `1.0`. [#1813]
|
||||
|
||||
[#1813]: https://github.com/actix/actix-web/pull/1813
|
||||
|
||||
</details>
|
||||
|
||||
## 2.1.0 - 2020-11-25
|
||||
|
||||
- Add ability to set address for `TestServer`. [#1645]
|
||||
- Upgrade `base64` to `0.13`.
|
||||
- Upgrade `serde_urlencoded` to `0.7`. [#1773]
|
||||
@ -78,12 +99,12 @@
|
||||
[#1773]: https://github.com/actix/actix-web/pull/1773
|
||||
[#1645]: https://github.com/actix/actix-web/pull/1645
|
||||
|
||||
|
||||
## 2.0.0 - 2020-09-11
|
||||
|
||||
- Update actix-codec and actix-utils dependencies.
|
||||
|
||||
|
||||
## 2.0.0-alpha.1 - 2020-05-23
|
||||
|
||||
- Update the `time` dependency to 0.2.7
|
||||
- Update `actix-connect` dependency to 2.0.0-alpha.2
|
||||
- Make `test_server` `async` fn.
|
||||
@ -93,55 +114,56 @@
|
||||
- Update `env_logger` dependency to 0.7
|
||||
|
||||
## 1.0.0 - 2019-12-13
|
||||
|
||||
- Replaced `TestServer::start()` with `test_server()`
|
||||
|
||||
|
||||
## 1.0.0-alpha.3 - 2019-12-07
|
||||
|
||||
- Migrate to `std::future`
|
||||
|
||||
|
||||
## 0.2.5 - 2019-09-17
|
||||
|
||||
- Update serde_urlencoded to "0.6.1"
|
||||
- Increase TestServerRuntime timeouts from 500ms to 3000ms
|
||||
- Do not override current `System`
|
||||
|
||||
|
||||
## 0.2.4 - 2019-07-18
|
||||
|
||||
- Update actix-server to 0.6
|
||||
|
||||
|
||||
## 0.2.3 - 2019-07-16
|
||||
|
||||
- Add `delete`, `options`, `patch` methods to `TestServerRunner`
|
||||
|
||||
|
||||
## 0.2.2 - 2019-06-16
|
||||
|
||||
- Add .put() and .sput() methods
|
||||
|
||||
|
||||
## 0.2.1 - 2019-06-05
|
||||
|
||||
- Add license files
|
||||
|
||||
|
||||
## 0.2.0 - 2019-05-12
|
||||
|
||||
- Update awc and actix-http deps
|
||||
|
||||
|
||||
## 0.1.1 - 2019-04-24
|
||||
|
||||
- Always make new connection for http client
|
||||
|
||||
|
||||
## 0.1.0 - 2019-04-16
|
||||
|
||||
- No changes
|
||||
|
||||
|
||||
## 0.1.0-alpha.3 - 2019-04-02
|
||||
|
||||
- Request functions accept path #743
|
||||
|
||||
|
||||
## 0.1.0-alpha.2 - 2019-03-29
|
||||
|
||||
- Added TestServerRuntime::load_body() method
|
||||
- Update actix-http and awc libraries
|
||||
|
||||
|
||||
## 0.1.0-alpha.1 - 2019-03-28
|
||||
|
||||
- Initial impl
|
||||
|
@ -1,6 +1,6 @@
|
||||
[package]
|
||||
name = "actix-http-test"
|
||||
version = "3.0.0-beta.13"
|
||||
version = "3.1.0"
|
||||
authors = ["Nikolay Kim <fafhrd91@gmail.com>"]
|
||||
description = "Various helpers for Actix applications to use during testing"
|
||||
keywords = ["http", "web", "framework", "async", "futures"]
|
||||
@ -37,10 +37,9 @@ actix-rt = "2.2"
|
||||
actix-server = "2"
|
||||
awc = { version = "3", default-features = false }
|
||||
|
||||
base64 = "0.13"
|
||||
bytes = "1"
|
||||
futures-core = { version = "0.3.7", default-features = false }
|
||||
http = "0.2.5"
|
||||
futures-core = { version = "0.3.17", default-features = false }
|
||||
http = "0.2.7"
|
||||
log = "0.4"
|
||||
socket2 = "0.4"
|
||||
serde = "1.0"
|
||||
@ -48,7 +47,7 @@ serde_json = "1.0"
|
||||
slab = "0.4"
|
||||
serde_urlencoded = "0.7"
|
||||
tls-openssl = { version = "0.10.9", package = "openssl", optional = true }
|
||||
tokio = { version = "1.8.4", features = ["sync"] }
|
||||
tokio = { version = "1.24.2", features = ["sync"] }
|
||||
|
||||
[dev-dependencies]
|
||||
actix-web = { version = "4", default-features = false, features = ["cookies"] }
|
||||
|
@ -3,15 +3,15 @@
|
||||
> Various helpers for Actix applications to use during testing.
|
||||
|
||||
[](https://crates.io/crates/actix-http-test)
|
||||
[](https://docs.rs/actix-http-test/3.0.0-beta.13)
|
||||

|
||||
[](https://docs.rs/actix-http-test/3.1.0)
|
||||

|
||||

|
||||
<br>
|
||||
[](https://deps.rs/crate/actix-http-test/3.0.0-beta.13)
|
||||
[](https://deps.rs/crate/actix-http-test/3.1.0)
|
||||
[](https://crates.io/crates/actix-http-test)
|
||||
[](https://discord.gg/NWpN5mmg3x)
|
||||
|
||||
## Documentation & Resources
|
||||
|
||||
- [API Documentation](https://docs.rs/actix-http-test)
|
||||
- Minimum Supported Rust Version (MSRV): 1.54
|
||||
- Minimum Supported Rust Version (MSRV): 1.59
|
||||
|
@ -2,8 +2,10 @@
|
||||
|
||||
#![deny(rust_2018_idioms, nonstandard_style)]
|
||||
#![warn(future_incompatible)]
|
||||
#![allow(clippy::uninlined_format_args)]
|
||||
#![doc(html_logo_url = "https://actix.rs/img/logo.png")]
|
||||
#![doc(html_favicon_url = "https://actix.rs/favicon.ico")]
|
||||
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
|
||||
|
||||
#[cfg(feature = "openssl")]
|
||||
extern crate tls_openssl as openssl;
|
||||
@ -87,6 +89,7 @@ pub async fn test_server_with_addr<F: ServerServiceFactory<TcpStream>>(
|
||||
|
||||
// notify TestServer that server and system have shut down
|
||||
// all thread managed resources should be dropped at this point
|
||||
#[allow(clippy::let_underscore_future)]
|
||||
let _ = thread_stop_tx.send(());
|
||||
});
|
||||
|
||||
@ -294,6 +297,7 @@ impl Drop for TestServer {
|
||||
// without needing to await anything
|
||||
|
||||
// signal server to stop
|
||||
#[allow(clippy::let_underscore_future)]
|
||||
let _ = self.server.stop(true);
|
||||
|
||||
// signal system to stop
|
||||
|
File diff suppressed because it is too large.
@ -1,6 +1,6 @@
|
||||
[package]
|
||||
name = "actix-http"
|
||||
version = "3.1.0"
|
||||
version = "3.3.1"
|
||||
authors = [
|
||||
"Nikolay Kim <fafhrd91@gmail.com>",
|
||||
"Rob Ede <robjtede@icloud.com>",
|
||||
@ -61,14 +61,14 @@ actix-codec = "0.5"
|
||||
actix-utils = "3"
|
||||
actix-rt = { version = "2.2", default-features = false }
|
||||
|
||||
ahash = "0.7"
|
||||
ahash = "0.8"
|
||||
bitflags = "1.2"
|
||||
bytes = "1"
|
||||
bytestring = "1"
|
||||
derive_more = "0.99.5"
|
||||
encoding_rs = "0.8"
|
||||
futures-core = { version = "0.3.7", default-features = false, features = ["alloc"] }
|
||||
http = "0.2.5"
|
||||
futures-core = { version = "0.3.17", default-features = false, features = ["alloc"] }
|
||||
http = "0.2.7"
|
||||
httparse = "1.5.1"
|
||||
httpdate = "1.0.1"
|
||||
itoa = "1"
|
||||
@ -77,6 +77,8 @@ mime = "0.3"
|
||||
percent-encoding = "2.1"
|
||||
pin-project-lite = "0.2"
|
||||
smallvec = "1.6.1"
|
||||
tokio = { version = "1.24.2", features = [] }
|
||||
tokio-util = { version = "0.7", features = ["io", "codec"] }
|
||||
tracing = { version = "0.1.30", default-features = false, features = ["log"] }
|
||||
|
||||
# http2
|
||||
@ -84,7 +86,7 @@ h2 = { version = "0.3.9", optional = true }
|
||||
|
||||
# websockets
|
||||
local-channel = { version = "0.1", optional = true }
|
||||
base64 = { version = "0.13", optional = true }
|
||||
base64 = { version = "0.21", optional = true }
|
||||
rand = { version = "0.8", optional = true }
|
||||
sha1 = { version = "0.10", optional = true }
|
||||
|
||||
@ -94,30 +96,30 @@ actix-tls = { version = "3", default-features = false, optional = true }
|
||||
# compress-*
|
||||
brotli = { version = "3.3.3", optional = true }
|
||||
flate2 = { version = "1.0.13", optional = true }
|
||||
zstd = { version = "0.11", optional = true }
|
||||
zstd = { version = "0.12", optional = true }
|
||||
|
||||
[dev-dependencies]
|
||||
actix-http-test = { version = "3.0.0-beta.13", features = ["openssl"] }
|
||||
actix-http-test = { version = "3", features = ["openssl"] }
|
||||
actix-server = "2"
|
||||
actix-tls = { version = "3", features = ["openssl"] }
|
||||
actix-web = "4"
|
||||
|
||||
async-stream = "0.3"
|
||||
criterion = { version = "0.3", features = ["html_reports"] }
|
||||
criterion = { version = "0.4", features = ["html_reports"] }
|
||||
env_logger = "0.9"
|
||||
futures-util = { version = "0.3.7", default-features = false, features = ["alloc"] }
|
||||
futures-util = { version = "0.3.17", default-features = false, features = ["alloc"] }
|
||||
memchr = "2.4"
|
||||
once_cell = "1.9"
|
||||
rcgen = "0.8"
|
||||
rcgen = "0.9"
|
||||
regex = "1.3"
|
||||
rustversion = "1"
|
||||
rustls-pemfile = "0.2"
|
||||
rustls-pemfile = "1"
|
||||
serde = { version = "1.0", features = ["derive"] }
|
||||
serde_json = "1.0"
|
||||
static_assertions = "1"
|
||||
tls-openssl = { package = "openssl", version = "0.10.9" }
|
||||
tls-rustls = { package = "rustls", version = "0.20.0" }
|
||||
tokio = { version = "1.8.4", features = ["net", "rt", "macros"] }
|
||||
tokio = { version = "1.24.2", features = ["net", "rt", "macros"] }
|
||||
|
||||
[[example]]
|
||||
name = "ws"
|
||||
|
@ -3,18 +3,18 @@
|
||||
> HTTP primitives for the Actix ecosystem.
|
||||
|
||||
[](https://crates.io/crates/actix-http)
|
||||
[](https://docs.rs/actix-http/3.1.0)
|
||||

|
||||
[](https://docs.rs/actix-http/3.3.1)
|
||||

|
||||

|
||||
<br />
|
||||
[](https://deps.rs/crate/actix-http/3.1.0)
|
||||
[](https://deps.rs/crate/actix-http/3.3.1)
|
||||
[](https://crates.io/crates/actix-http)
|
||||
[](https://discord.gg/NWpN5mmg3x)
|
||||
|
||||
## Documentation & Resources
|
||||
|
||||
- [API Documentation](https://docs.rs/actix-http)
|
||||
- Minimum Supported Rust Version (MSRV): 1.54
|
||||
- Minimum Supported Rust Version (MSRV): 1.59
|
||||
|
||||
## Example
|
||||
|
||||
@ -49,18 +49,3 @@ async fn main() -> io::Result<()> {
|
||||
.await
|
||||
}
|
||||
```
|
||||
|
||||
## License
|
||||
|
||||
This project is licensed under either of
|
||||
|
||||
- Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0))
|
||||
- MIT license ([LICENSE-MIT](LICENSE-MIT) or [http://opensource.org/licenses/MIT](http://opensource.org/licenses/MIT))
|
||||
|
||||
at your option.
|
||||
|
||||
## Code of Conduct
|
||||
|
||||
Contribution to the actix-http crate is organized under the terms of the
|
||||
Contributor Covenant, the maintainer of actix-http, @fafhrd91, promises to
|
||||
intervene to uphold that code of conduct.
|
||||
|
@ -1,3 +1,5 @@
|
||||
#![allow(clippy::uninlined_format_args)]
|
||||
|
||||
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};
|
||||
|
||||
const CODES: &[u16] = &[0, 1000, 201, 800, 550];
|
||||
|
`actix-http/examples/h2c-detect.rs` (new file, 29 lines)

```rust
//! An example that supports automatic selection of plaintext h1/h2c connections.
//!
//! Notably, both the following commands will work.
//! ```console
//! $ curl --http1.1 'http://localhost:8080/'
//! $ curl --http2-prior-knowledge 'http://localhost:8080/'
//! ```

use std::{convert::Infallible, io};

use actix_http::{HttpService, Request, Response, StatusCode};
use actix_server::Server;

#[tokio::main(flavor = "current_thread")]
async fn main() -> io::Result<()> {
    env_logger::init_from_env(env_logger::Env::new().default_filter_or("info"));

    Server::build()
        .bind("h2c-detect", ("127.0.0.1", 8080), || {
            HttpService::build()
                .finish(|_req: Request| async move {
                    Ok::<_, Infallible>(Response::build(StatusCode::OK).body("Hello!"))
                })
                .tcp_auto_h2c()
        })?
        .workers(2)
        .run()
        .await
}
```
@ -10,13 +10,13 @@ use std::{
|
||||
time::Duration,
|
||||
};
|
||||
|
||||
use actix_codec::Encoder;
|
||||
use actix_http::{body::BodyStream, error::Error, ws, HttpService, Request, Response};
|
||||
use actix_rt::time::{interval, Interval};
|
||||
use actix_server::Server;
|
||||
use bytes::{Bytes, BytesMut};
|
||||
use bytestring::ByteString;
|
||||
use futures_core::{ready, Stream};
|
||||
use tokio_util::codec::Encoder;
|
||||
use tracing::{info, trace};
|
||||
|
||||
#[actix_rt::main]
|
||||
|
@ -120,8 +120,28 @@ pub trait MessageBody {
|
||||
}
|
||||
|
||||
mod foreign_impls {
|
||||
use std::{borrow::Cow, ops::DerefMut};
|
||||
|
||||
use super::*;
|
||||
|
||||
impl<B> MessageBody for &mut B
|
||||
where
|
||||
B: MessageBody + Unpin + ?Sized,
|
||||
{
|
||||
type Error = B::Error;
|
||||
|
||||
fn size(&self) -> BodySize {
|
||||
(**self).size()
|
||||
}
|
||||
|
||||
fn poll_next(
|
||||
mut self: Pin<&mut Self>,
|
||||
cx: &mut Context<'_>,
|
||||
) -> Poll<Option<Result<Bytes, Self::Error>>> {
|
||||
Pin::new(&mut **self).poll_next(cx)
|
||||
}
|
||||
}
|
||||
|
||||
impl MessageBody for Infallible {
|
||||
type Error = Infallible;
|
||||
|
||||
@ -179,8 +199,9 @@ mod foreign_impls {
|
||||
}
|
||||
}
|
||||
|
||||
impl<B> MessageBody for Pin<Box<B>>
|
||||
impl<T, B> MessageBody for Pin<T>
|
||||
where
|
||||
T: DerefMut<Target = B> + Unpin,
|
||||
B: MessageBody + ?Sized,
|
||||
{
|
||||
type Error = B::Error;
|
||||
@ -303,6 +324,39 @@ mod foreign_impls {
|
||||
}
|
||||
}
|
||||
|
||||
impl MessageBody for Cow<'static, [u8]> {
|
||||
type Error = Infallible;
|
||||
|
||||
#[inline]
|
||||
fn size(&self) -> BodySize {
|
||||
BodySize::Sized(self.len() as u64)
|
||||
}
|
||||
|
||||
#[inline]
|
||||
fn poll_next(
|
||||
self: Pin<&mut Self>,
|
||||
_cx: &mut Context<'_>,
|
||||
) -> Poll<Option<Result<Bytes, Self::Error>>> {
|
||||
if self.is_empty() {
|
||||
Poll::Ready(None)
|
||||
} else {
|
||||
let bytes = match mem::take(self.get_mut()) {
|
||||
Cow::Borrowed(b) => Bytes::from_static(b),
|
||||
Cow::Owned(b) => Bytes::from(b),
|
||||
};
|
||||
Poll::Ready(Some(Ok(bytes)))
|
||||
}
|
||||
}
|
||||
|
||||
#[inline]
|
||||
fn try_into_bytes(self) -> Result<Bytes, Self> {
|
||||
match self {
|
||||
Cow::Borrowed(b) => Ok(Bytes::from_static(b)),
|
||||
Cow::Owned(b) => Ok(Bytes::from(b)),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl MessageBody for &'static str {
|
||||
type Error = Infallible;
|
||||
|
||||
@ -358,6 +412,39 @@ mod foreign_impls {
|
||||
}
|
||||
}
|
||||
|
||||
impl MessageBody for Cow<'static, str> {
|
||||
type Error = Infallible;
|
||||
|
||||
#[inline]
|
||||
fn size(&self) -> BodySize {
|
||||
BodySize::Sized(self.len() as u64)
|
||||
}
|
||||
|
||||
#[inline]
|
||||
fn poll_next(
|
||||
self: Pin<&mut Self>,
|
||||
_cx: &mut Context<'_>,
|
||||
) -> Poll<Option<Result<Bytes, Self::Error>>> {
|
||||
if self.is_empty() {
|
||||
Poll::Ready(None)
|
||||
} else {
|
||||
let bytes = match mem::take(self.get_mut()) {
|
||||
Cow::Borrowed(s) => Bytes::from_static(s.as_bytes()),
|
||||
Cow::Owned(s) => Bytes::from(s.into_bytes()),
|
||||
};
|
||||
Poll::Ready(Some(Ok(bytes)))
|
||||
}
|
||||
}
|
||||
|
||||
#[inline]
|
||||
fn try_into_bytes(self) -> Result<Bytes, Self> {
|
||||
match self {
|
||||
Cow::Borrowed(s) => Ok(Bytes::from_static(s.as_bytes())),
|
||||
Cow::Owned(s) => Ok(Bytes::from(s.into_bytes())),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl MessageBody for bytestring::ByteString {
|
||||
type Error = Infallible;
|
||||
|
||||
@ -445,6 +532,7 @@ mod tests {
|
||||
use actix_rt::pin;
|
||||
use actix_utils::future::poll_fn;
|
||||
use bytes::{Bytes, BytesMut};
|
||||
use futures_util::stream;
|
||||
|
||||
use super::*;
|
||||
use crate::body::{self, EitherBody};
|
||||
@ -481,6 +569,35 @@ mod tests {
|
||||
assert_poll_next_none!(pl);
|
||||
}
|
||||
|
||||
#[actix_rt::test]
|
||||
async fn mut_equivalence() {
|
||||
assert_eq!(().size(), BodySize::Sized(0));
|
||||
assert_eq!(().size(), (&(&mut ())).size());
|
||||
|
||||
let pl = &mut ();
|
||||
pin!(pl);
|
||||
assert_poll_next_none!(pl);
|
||||
|
||||
let pl = &mut Box::new(());
|
||||
pin!(pl);
|
||||
assert_poll_next_none!(pl);
|
||||
|
||||
let mut body = body::SizedStream::new(
|
||||
8,
|
||||
stream::iter([
|
||||
Ok::<_, std::io::Error>(Bytes::from("1234")),
|
||||
Ok(Bytes::from("5678")),
|
||||
]),
|
||||
);
|
||||
let body = &mut body;
|
||||
assert_eq!(body.size(), BodySize::Sized(8));
|
||||
pin!(body);
|
||||
assert_poll_next!(body, Bytes::from_static(b"1234"));
|
||||
assert_poll_next!(body, Bytes::from_static(b"5678"));
|
||||
assert_poll_next_none!(body);
|
||||
}
|
||||
|
||||
#[allow(clippy::let_unit_value)]
|
||||
#[actix_rt::test]
|
||||
async fn test_unit() {
|
||||
let pl = ();
|
||||
@ -606,4 +723,18 @@ mod tests {
|
||||
let not_body = resp_body.downcast_ref::<()>();
|
||||
assert!(not_body.is_none());
|
||||
}
|
||||
|
||||
#[actix_rt::test]
|
||||
async fn non_owning_to_bytes() {
|
||||
let mut body = BoxBody::new(());
|
||||
let bytes = body::to_bytes(&mut body).await.unwrap();
|
||||
assert_eq!(bytes, Bytes::new());
|
||||
|
||||
let mut body = body::BodyStream::new(stream::iter([
|
||||
Ok::<_, std::io::Error>(Bytes::from("1234")),
|
||||
Ok(Bytes::from("5678")),
|
||||
]));
|
||||
let bytes = body::to_bytes(&mut body).await.unwrap();
|
||||
assert_eq!(bytes, Bytes::from_static(b"12345678"));
|
||||
}
|
||||
}
|
||||
|
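The `Cow` impls shown in the `MessageBody` hunks above let either a borrowed or an owned buffer act as a body directly. A minimal sketch of that idea, assuming an actix-http release that includes these impls, using `body::to_bytes` only to drive the bodies to completion:

```rust
use std::borrow::Cow;

use actix_http::body;
use bytes::Bytes;

#[actix_rt::main]
async fn main() {
    let borrowed: Cow<'static, str> = Cow::Borrowed("static data");
    let owned: Cow<'static, [u8]> = Cow::Owned(b"owned data".to_vec());

    // Both variants implement `MessageBody` with `Error = Infallible`.
    assert_eq!(
        body::to_bytes(borrowed).await.unwrap(),
        Bytes::from_static(b"static data")
    );
    assert_eq!(
        body::to_bytes(owned).await.unwrap(),
        Bytes::from_static(b"owned data")
    );
}
```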
```diff
@@ -44,7 +44,7 @@ where
 
     #[inline]
     fn size(&self) -> BodySize {
-        BodySize::Sized(self.size as u64)
+        BodySize::Sized(self.size)
     }
 
     /// Attempts to pull out the next value of the underlying [`Stream`].
```
```diff
@@ -42,7 +42,7 @@ pub async fn to_bytes<B: MessageBody>(body: B) -> Result<Bytes, B::Error> {
         let body = body.as_mut();
 
         match ready!(body.poll_next(cx)) {
-            Some(Ok(bytes)) => buf.extend_from_slice(&*bytes),
+            Some(Ok(bytes)) => buf.extend_from_slice(&bytes),
             None => return Poll::Ready(Ok(())),
             Some(Err(err)) => return Poll::Ready(Err(err)),
         }
```
```diff
@@ -186,7 +186,7 @@ where
         self
     }
 
-    /// Finish service configuration and create a HTTP Service for HTTP/1 protocol.
+    /// Finish service configuration and create a service for the HTTP/1 protocol.
     pub fn h1<F, B>(self, service: F) -> H1Service<T, S, B, X, U>
     where
         B: MessageBody,
@@ -209,7 +209,7 @@ where
             .on_connect_ext(self.on_connect_ext)
     }
 
-    /// Finish service configuration and create a HTTP service for HTTP/2 protocol.
+    /// Finish service configuration and create a service for the HTTP/2 protocol.
     #[cfg(feature = "http2")]
     pub fn h2<F, B>(self, service: F) -> crate::h2::H2Service<T, S, B>
     where
```
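To make the `h1` builder path concrete, here is a hedged sketch of an HTTP/1-only server assembled this way (assuming `actix-server` 2 and `actix-rt` are available alongside actix-http; the bind name and address are arbitrary):

```rust
use std::{convert::Infallible, io};

use actix_http::{HttpService, Request, Response, StatusCode};
use actix_server::Server;

#[actix_rt::main]
async fn main() -> io::Result<()> {
    Server::build()
        .bind("h1-only", ("127.0.0.1", 8080), || {
            HttpService::build()
                // `.h1()` finishes the builder with an HTTP/1-only service...
                .h1(|_req: Request| async move {
                    Ok::<_, Infallible>(Response::build(StatusCode::OK).body("plain HTTP/1"))
                })
                // ...and `.tcp()` adapts it to plain TCP connections.
                .tcp()
        })?
        .run()
        .await
}
```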
```diff
@@ -35,7 +35,7 @@ impl Default for ServiceConfig {
 }
 
 impl ServiceConfig {
-    /// Create instance of `ServiceConfig`
+    /// Create instance of `ServiceConfig`.
     pub fn new(
         keep_alive: KeepAlive,
         client_request_timeout: Duration,
```
```diff
@@ -257,7 +257,7 @@ fn update_head(encoding: ContentEncoding, head: &mut ResponseHead) {
     head.headers_mut()
         .insert(header::CONTENT_ENCODING, encoding.to_header_value());
     head.headers_mut()
-        .insert(header::VARY, HeaderValue::from_static("accept-encoding"));
+        .append(header::VARY, HeaderValue::from_static("accept-encoding"));
 
     head.no_chunking(false);
 }
```
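The switch from `insert` to `append` matters because `HeaderMap` treats them differently: `insert` replaces all existing values for a name, while `append` adds another one alongside them. A small standalone sketch (the header values are arbitrary):

```rust
use actix_http::header::{self, HeaderMap, HeaderValue};

fn main() {
    let mut headers = HeaderMap::new();

    // Pretend an earlier layer already set a Vary header.
    headers.insert(header::VARY, HeaderValue::from_static("origin"));

    // `append` keeps the existing value and adds another one;
    // `insert` here would have silently discarded "origin".
    headers.append(header::VARY, HeaderValue::from_static("accept-encoding"));

    assert_eq!(headers.get_all(header::VARY).count(), 2);
}
```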
```diff
@@ -388,7 +388,7 @@ impl StdError for DispatchError {
 
 /// A set of error that can occur during parsing content type.
 #[derive(Debug, Display, Error)]
-#[cfg_attr(test, derive(PartialEq))]
+#[cfg_attr(test, derive(PartialEq, Eq))]
 #[non_exhaustive]
 pub enum ContentTypeError {
     /// Can not parse content type
```
@@ -1,9 +1,30 @@
use std::{
any::{Any, TypeId},
collections::HashMap,
fmt,
hash::{BuildHasherDefault, Hasher},
};

use ahash::AHashMap;
/// A hasher for `TypeId`s that takes advantage of its known characteristics.
///
/// Author of `anymap` crate has done research on the topic:
/// https://github.com/chris-morgan/anymap/blob/2e9a5704/src/lib.rs#L599
#[derive(Debug, Default)]
struct NoOpHasher(u64);

impl Hasher for NoOpHasher {
fn write(&mut self, _bytes: &[u8]) {
unimplemented!("This NoOpHasher can only handle u64s")
}

fn write_u64(&mut self, i: u64) {
self.0 = i;
}

fn finish(&self) -> u64 {
self.0
}
}

/// A type map for request extensions.
///

@@ -11,7 +32,7 @@ use ahash::AHashMap;
#[derive(Default)]
pub struct Extensions {
/// Use AHasher with a std HashMap with for faster lookups on the small `TypeId` keys.
map: AHashMap<TypeId, Box<dyn Any>>,
map: HashMap<TypeId, Box<dyn Any>, BuildHasherDefault<NoOpHasher>>,
}

impl Extensions {

@@ -19,7 +40,7 @@ impl Extensions {
#[inline]
pub fn new() -> Extensions {
Extensions {
map: AHashMap::new(),
map: HashMap::default(),
}
}
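The hunk above swaps the `ahash`-based map for a std `HashMap` keyed by `TypeId` with a pass-through hasher. A minimal, self-contained sketch of the same idea is below; the byte-folding fallback in `write` is an assumption added here so the sketch stays valid even if `TypeId`'s `Hash` impl feeds bytes instead of a `u64` — it is not part of the actix-http code.

```rust
use std::{
    any::{Any, TypeId},
    collections::HashMap,
    hash::{BuildHasherDefault, Hasher},
};

/// Pass-through hasher: a `TypeId` is already a high-quality hash, so keep its bits.
#[derive(Default)]
struct IdentityHasher(u64);

impl Hasher for IdentityHasher {
    fn write(&mut self, bytes: &[u8]) {
        // Fallback: fold whatever bytes arrive into the state (portability assumption).
        for &b in bytes {
            self.0 = self.0.rotate_left(8) ^ u64::from(b);
        }
    }

    fn write_u64(&mut self, i: u64) {
        self.0 = i;
    }

    fn finish(&self) -> u64 {
        self.0
    }
}

type TypeMap = HashMap<TypeId, Box<dyn Any>, BuildHasherDefault<IdentityHasher>>;

fn main() {
    let mut map: TypeMap = HashMap::default();
    map.insert(TypeId::of::<u32>(), Box::new(5u32));

    // look up by type and downcast back to the concrete value
    let val = map
        .get(&TypeId::of::<u32>())
        .and_then(|boxed| boxed.downcast_ref::<u32>());
    assert_eq!(val, Some(&5));
    println!("stored u32: {val:?}");
}
```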
@@ -15,7 +15,7 @@ macro_rules! byte (
})
);

#[derive(Debug, PartialEq, Clone)]
#[derive(Debug, Clone, PartialEq, Eq)]
pub(super) enum ChunkedState {
Size,
SizeLws,

@@ -71,7 +71,7 @@ impl ChunkedState {

match size.checked_mul(radix) {
Some(n) => {
*size = n as u64;
*size = n;
*size += rem as u64;

Poll::Ready(Ok(ChunkedState::Size))

@@ -1,9 +1,9 @@
use std::{fmt, io};

use actix_codec::{Decoder, Encoder};
use bitflags::bitflags;
use bytes::{Bytes, BytesMut};
use http::{Method, Version};
use tokio_util::codec::{Decoder, Encoder};

use super::{
decoder::{self, PayloadDecoder, PayloadItem, PayloadType},

@@ -1,9 +1,9 @@
use std::{fmt, io};

use actix_codec::{Decoder, Encoder};
use bitflags::bitflags;
use bytes::BytesMut;
use http::{Method, Version};
use tokio_util::codec::{Decoder, Encoder};

use super::{
decoder::{self, PayloadDecoder, PayloadItem, PayloadType},
@@ -46,6 +46,23 @@ pub(crate) enum PayloadLength {
None,
}

impl PayloadLength {
/// Returns true if variant is `None`.
fn is_none(&self) -> bool {
matches!(self, Self::None)
}

/// Returns true if variant is represents zero-length (not none) payload.
fn is_zero(&self) -> bool {
matches!(
self,
PayloadLength::Payload(PayloadType::Payload(PayloadDecoder {
kind: Kind::Length(0)
}))
)
}
}

pub(crate) trait MessageType: Sized {
fn set_connection_type(&mut self, conn_type: Option<ConnectionType>);

@@ -59,6 +76,7 @@ pub(crate) trait MessageType: Sized {
&mut self,
slice: &Bytes,
raw_headers: &[HeaderIndex],
version: Version,
) -> Result<PayloadLength, ParseError> {
let mut ka = None;
let mut has_upgrade_websocket = false;

@@ -87,21 +105,23 @@ pub(crate) trait MessageType: Sized {
return Err(ParseError::Header);
}

header::CONTENT_LENGTH => match value.to_str() {
Ok(s) if s.trim().starts_with('+') => {
debug!("illegal Content-Length: {:?}", s);
header::CONTENT_LENGTH => match value.to_str().map(str::trim) {
Ok(val) if val.starts_with('+') => {
debug!("illegal Content-Length: {:?}", val);
return Err(ParseError::Header);
}
Ok(s) => {
if let Ok(len) = s.parse::<u64>() {
if len != 0 {
content_length = Some(len);
}

Ok(val) => {
if let Ok(len) = val.parse::<u64>() {
// accept 0 lengths here and remove them in `decode` after all
// headers have been processed to prevent request smuggling issues
content_length = Some(len);
} else {
debug!("illegal Content-Length: {:?}", s);
debug!("illegal Content-Length: {:?}", val);
return Err(ParseError::Header);
}
}

Err(_) => {
debug!("illegal Content-Length: {:?}", value);
return Err(ParseError::Header);

@@ -114,22 +134,23 @@ pub(crate) trait MessageType: Sized {
return Err(ParseError::Header);
}

header::TRANSFER_ENCODING => {
header::TRANSFER_ENCODING if version == Version::HTTP_11 => {
seen_te = true;

if let Ok(s) = value.to_str().map(str::trim) {
if s.eq_ignore_ascii_case("chunked") {
if let Ok(val) = value.to_str().map(str::trim) {
if val.eq_ignore_ascii_case("chunked") {
chunked = true;
} else if s.eq_ignore_ascii_case("identity") {
} else if val.eq_ignore_ascii_case("identity") {
// allow silently since multiple TE headers are already checked
} else {
debug!("illegal Transfer-Encoding: {:?}", s);
debug!("illegal Transfer-Encoding: {:?}", val);
return Err(ParseError::Header);
}
} else {
return Err(ParseError::Header);
}
}

// connection keep-alive state
header::CONNECTION => {
ka = if let Ok(conn) = value.to_str().map(str::trim) {

@@ -146,6 +167,7 @@ pub(crate) trait MessageType: Sized {
None
};
}

header::UPGRADE => {
if let Ok(val) = value.to_str().map(str::trim) {
if val.eq_ignore_ascii_case("websocket") {

@@ -153,19 +175,23 @@ pub(crate) trait MessageType: Sized {
}
}
}

header::EXPECT => {
let bytes = value.as_bytes();
if bytes.len() >= 4 && &bytes[0..4] == b"100-" {
expect = true;
}
}

_ => {}
}

headers.append(name, value);
}
}

self.set_connection_type(ka);

if expect {
self.set_expect()
}

@@ -249,7 +275,22 @@ impl MessageType for Request {
let mut msg = Request::new();

// convert headers
let length = msg.set_headers(&src.split_to(len).freeze(), &headers[..h_len])?;
let mut length =
msg.set_headers(&src.split_to(len).freeze(), &headers[..h_len], ver)?;

// disallow HTTP/1.0 POST requests that do not contain a Content-Length headers
// see https://datatracker.ietf.org/doc/html/rfc1945#section-7.2.2
if ver == Version::HTTP_10 && method == Method::POST && length.is_none() {
debug!("no Content-Length specified for HTTP/1.0 POST request");
return Err(ParseError::Header);
}

// Remove CL value if 0 now that all headers and HTTP/1.0 special cases are processed.
// Protects against some request smuggling attacks.
// See https://github.com/actix/actix-web/issues/2767.
if length.is_zero() {
length = PayloadLength::None;
}

// payload decoder
let decoder = match length {

@@ -337,7 +378,15 @@ impl MessageType for ResponseHead {
msg.version = ver;

// convert headers
let length = msg.set_headers(&src.split_to(len).freeze(), &headers[..h_len])?;
let mut length =
msg.set_headers(&src.split_to(len).freeze(), &headers[..h_len], ver)?;

// Remove CL value if 0 now that all headers and HTTP/1.0 special cases are processed.
// Protects against some request smuggling attacks.
// See https://github.com/actix/actix-web/issues/2767.
if length.is_zero() {
length = PayloadLength::None;
}

// message payload
let decoder = if let PayloadLength::Payload(pl) = length {

@@ -391,7 +440,7 @@ impl HeaderIndex {
}
}

#[derive(Debug, Clone, PartialEq)]
#[derive(Debug, Clone, PartialEq, Eq)]
/// Chunk type yielded while decoding a payload.
pub enum PayloadItem {
Chunk(Bytes),

@@ -401,7 +450,7 @@ pub enum PayloadItem {
/// Decoder that can handle different payload types.
///
/// If a message body does not use `Transfer-Encoding`, it should include a `Content-Length`.
#[derive(Debug, Clone, PartialEq)]
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct PayloadDecoder {
kind: Kind,
}

@@ -427,7 +476,7 @@ impl PayloadDecoder {
}
}

#[derive(Debug, Clone, PartialEq)]
#[derive(Debug, Clone, PartialEq, Eq)]
enum Kind {
/// A reader used when a `Content-Length` header is passed with a positive integer.
Length(u64),
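The change above defers the `Content-Length: 0` check until after every header has been read, so a zero length can no longer mask a second, conflicting `Content-Length` header. A small illustrative sketch of that ordering, using plain `Option<u64>` values rather than the crate's internal types (the exact duplicate-header rules here are simplified assumptions):

```rust
/// Parse all Content-Length headers first, then decide what the payload length means.
/// Conflicting duplicates are rejected; a zero length is only dropped at the very end.
fn effective_content_length(headers: &[(&str, &str)]) -> Result<Option<u64>, &'static str> {
    let mut content_length: Option<u64> = None;

    for (name, value) in headers {
        if name.eq_ignore_ascii_case("content-length") {
            let value = value.trim();

            // a leading '+' or a non-numeric value is rejected outright
            if value.starts_with('+') {
                return Err("illegal Content-Length");
            }
            let len: u64 = value.parse().map_err(|_| "illegal Content-Length")?;

            match content_length {
                Some(prev) if prev != len => return Err("conflicting Content-Length"),
                _ => content_length = Some(len),
            }
        }
    }

    // only now is a zero length treated as "no payload"
    Ok(content_length.filter(|&len| len != 0))
}

fn main() {
    assert_eq!(effective_content_length(&[("Content-Length", "0")]), Ok(None));
    assert_eq!(
        effective_content_length(&[("Content-Length", "0"), ("Content-Length", "2")]),
        Err("conflicting Content-Length")
    );
}
```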
@@ -606,14 +655,100 @@ mod tests {
}

#[test]
fn test_parse_post() {
let mut buf = BytesMut::from("POST /test2 HTTP/1.0\r\n\r\n");
fn parse_h09_reject() {
let mut buf = BytesMut::from(
"GET /test1 HTTP/0.9\r\n\
\r\n",
);

let mut reader = MessageDecoder::<Request>::default();
reader.decode(&mut buf).unwrap_err();

let mut buf = BytesMut::from(
"POST /test2 HTTP/0.9\r\n\
Content-Length: 3\r\n\
\r\n
abc",
);

let mut reader = MessageDecoder::<Request>::default();
reader.decode(&mut buf).unwrap_err();
}

#[test]
fn parse_h10_get() {
let mut buf = BytesMut::from(
"GET /test1 HTTP/1.0\r\n\
\r\n",
);

let mut reader = MessageDecoder::<Request>::default();
let (req, _) = reader.decode(&mut buf).unwrap().unwrap();
assert_eq!(req.version(), Version::HTTP_10);
assert_eq!(*req.method(), Method::GET);
assert_eq!(req.path(), "/test1");

let mut buf = BytesMut::from(
"GET /test2 HTTP/1.0\r\n\
Content-Length: 0\r\n\
\r\n",
);

let mut reader = MessageDecoder::<Request>::default();
let (req, _) = reader.decode(&mut buf).unwrap().unwrap();
assert_eq!(req.version(), Version::HTTP_10);
assert_eq!(*req.method(), Method::GET);
assert_eq!(req.path(), "/test2");

let mut buf = BytesMut::from(
"GET /test3 HTTP/1.0\r\n\
Content-Length: 3\r\n\
\r\n
abc",
);

let mut reader = MessageDecoder::<Request>::default();
let (req, _) = reader.decode(&mut buf).unwrap().unwrap();
assert_eq!(req.version(), Version::HTTP_10);
assert_eq!(*req.method(), Method::GET);
assert_eq!(req.path(), "/test3");
}

#[test]
fn parse_h10_post() {
let mut buf = BytesMut::from(
"POST /test1 HTTP/1.0\r\n\
Content-Length: 3\r\n\
\r\n\
abc",
);

let mut reader = MessageDecoder::<Request>::default();
let (req, _) = reader.decode(&mut buf).unwrap().unwrap();
assert_eq!(req.version(), Version::HTTP_10);
assert_eq!(*req.method(), Method::POST);
assert_eq!(req.path(), "/test1");

let mut buf = BytesMut::from(
"POST /test2 HTTP/1.0\r\n\
Content-Length: 0\r\n\
\r\n",
);

let mut reader = MessageDecoder::<Request>::default();
let (req, _) = reader.decode(&mut buf).unwrap().unwrap();
assert_eq!(req.version(), Version::HTTP_10);
assert_eq!(*req.method(), Method::POST);
assert_eq!(req.path(), "/test2");

let mut buf = BytesMut::from(
"POST /test3 HTTP/1.0\r\n\
\r\n",
);

let mut reader = MessageDecoder::<Request>::default();
let err = reader.decode(&mut buf).unwrap_err();
assert!(err.to_string().contains("Header"))
}

#[test]
@@ -709,121 +844,98 @@ mod tests {

#[test]
fn test_conn_default_1_0() {
let mut buf = BytesMut::from("GET /test HTTP/1.0\r\n\r\n");
let req = parse_ready!(&mut buf);

let req = parse_ready!(&mut BytesMut::from("GET /test HTTP/1.0\r\n\r\n"));
assert_eq!(req.head().connection_type(), ConnectionType::Close);
}

#[test]
fn test_conn_default_1_1() {
let mut buf = BytesMut::from("GET /test HTTP/1.1\r\n\r\n");
let req = parse_ready!(&mut buf);

let req = parse_ready!(&mut BytesMut::from("GET /test HTTP/1.1\r\n\r\n"));
assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive);
}

#[test]
fn test_conn_close() {
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
connection: close\r\n\r\n",
);
let req = parse_ready!(&mut buf);

));
assert_eq!(req.head().connection_type(), ConnectionType::Close);

let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
connection: Close\r\n\r\n",
);
let req = parse_ready!(&mut buf);

));
assert_eq!(req.head().connection_type(), ConnectionType::Close);
}

#[test]
fn test_conn_close_1_0() {
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.0\r\n\
connection: close\r\n\r\n",
);

let req = parse_ready!(&mut buf);

));
assert_eq!(req.head().connection_type(), ConnectionType::Close);
}

#[test]
fn test_conn_keep_alive_1_0() {
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.0\r\n\
connection: keep-alive\r\n\r\n",
);
let req = parse_ready!(&mut buf);

));
assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive);

let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.0\r\n\
connection: Keep-Alive\r\n\r\n",
);
let req = parse_ready!(&mut buf);

));
assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive);
}

#[test]
fn test_conn_keep_alive_1_1() {
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
connection: keep-alive\r\n\r\n",
);
let req = parse_ready!(&mut buf);

));
assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive);
}

#[test]
fn test_conn_other_1_0() {
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.0\r\n\
connection: other\r\n\r\n",
);
let req = parse_ready!(&mut buf);

));
assert_eq!(req.head().connection_type(), ConnectionType::Close);
}

#[test]
fn test_conn_other_1_1() {
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
connection: other\r\n\r\n",
);
let req = parse_ready!(&mut buf);

));
assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive);
}

#[test]
fn test_conn_upgrade() {
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
upgrade: websockets\r\n\
connection: upgrade\r\n\r\n",
);
let req = parse_ready!(&mut buf);
));

assert!(req.upgrade());
assert_eq!(req.head().connection_type(), ConnectionType::Upgrade);

let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
upgrade: Websockets\r\n\
connection: Upgrade\r\n\r\n",
);
let req = parse_ready!(&mut buf);
));

assert!(req.upgrade());
assert_eq!(req.head().connection_type(), ConnectionType::Upgrade);
@@ -831,59 +943,62 @@ mod tests {

#[test]
fn test_conn_upgrade_connect_method() {
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"CONNECT /test HTTP/1.1\r\n\
content-type: text/plain\r\n\r\n",
);
let req = parse_ready!(&mut buf);
));

assert!(req.upgrade());
}

#[test]
fn test_headers_content_length_err_1() {
let mut buf = BytesMut::from(
fn test_headers_bad_content_length() {
// string CL
expect_parse_err!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
content-length: line\r\n\r\n",
);
));

expect_parse_err!(&mut buf)
// negative CL
expect_parse_err!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
content-length: -1\r\n\r\n",
));
}

#[test]
fn test_headers_content_length_err_2() {
fn octal_ish_cl_parsed_as_decimal() {
let mut buf = BytesMut::from(
"GET /test HTTP/1.1\r\n\
content-length: -1\r\n\r\n",
"POST /test HTTP/1.1\r\n\
content-length: 011\r\n\r\n",
);

expect_parse_err!(&mut buf);
let mut reader = MessageDecoder::<Request>::default();
let (_req, pl) = reader.decode(&mut buf).unwrap().unwrap();
assert!(matches!(
pl,
PayloadType::Payload(pl) if pl == PayloadDecoder::length(11)
));
}

#[test]
fn test_invalid_header() {
let mut buf = BytesMut::from(
expect_parse_err!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
test line\r\n\r\n",
);

expect_parse_err!(&mut buf);
));
}

#[test]
fn test_invalid_name() {
let mut buf = BytesMut::from(
expect_parse_err!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
test[]: line\r\n\r\n",
);

expect_parse_err!(&mut buf);
));
}

#[test]
fn test_http_request_bad_status_line() {
let mut buf = BytesMut::from("getpath \r\n\r\n");
expect_parse_err!(&mut buf);
expect_parse_err!(&mut BytesMut::from("getpath \r\n\r\n"));
}

#[test]

@@ -923,11 +1038,10 @@ mod tests {

#[test]
fn test_http_request_parser_utf8() {
let mut buf = BytesMut::from(
let req = parse_ready!(&mut BytesMut::from(
"GET /test HTTP/1.1\r\n\
x-test: тест\r\n\r\n",
);
let req = parse_ready!(&mut buf);
));

assert_eq!(
req.headers().get("x-test").unwrap().as_bytes(),

@@ -937,24 +1051,18 @@ mod tests {

#[test]
fn test_http_request_parser_two_slashes() {
let mut buf = BytesMut::from("GET //path HTTP/1.1\r\n\r\n");
let req = parse_ready!(&mut buf);

let req = parse_ready!(&mut BytesMut::from("GET //path HTTP/1.1\r\n\r\n"));
assert_eq!(req.path(), "//path");
}

#[test]
fn test_http_request_parser_bad_method() {
let mut buf = BytesMut::from("!12%()+=~$ /get HTTP/1.1\r\n\r\n");

expect_parse_err!(&mut buf);
expect_parse_err!(&mut BytesMut::from("!12%()+=~$ /get HTTP/1.1\r\n\r\n"));
}

#[test]
fn test_http_request_parser_bad_version() {
let mut buf = BytesMut::from("GET //get HT/11\r\n\r\n");

expect_parse_err!(&mut buf);
expect_parse_err!(&mut BytesMut::from("GET //get HT/11\r\n\r\n"));
}

#[test]

@@ -971,29 +1079,66 @@ mod tests {

#[test]
fn hrs_multiple_content_length() {
let mut buf = BytesMut::from(
expect_parse_err!(&mut BytesMut::from(
"GET / HTTP/1.1\r\n\
Host: example.com\r\n\
Content-Length: 4\r\n\
Content-Length: 2\r\n\
\r\n\
abcd",
);
));

expect_parse_err!(&mut buf);
expect_parse_err!(&mut BytesMut::from(
"GET / HTTP/1.1\r\n\
Host: example.com\r\n\
Content-Length: 0\r\n\
Content-Length: 2\r\n\
\r\n\
ab",
));
}

#[test]
fn hrs_content_length_plus() {
let mut buf = BytesMut::from(
expect_parse_err!(&mut BytesMut::from(
"GET / HTTP/1.1\r\n\
Host: example.com\r\n\
Content-Length: +3\r\n\
\r\n\
000",
));
}

#[test]
fn hrs_te_http10() {
// in HTTP/1.0 transfer encoding is ignored and must therefore contain a CL header

expect_parse_err!(&mut BytesMut::from(
"POST / HTTP/1.0\r\n\
Host: example.com\r\n\
Transfer-Encoding: chunked\r\n\
\r\n\
3\r\n\
aaa\r\n\
0\r\n\
",
));
}

#[test]
fn hrs_cl_and_te_http10() {
// in HTTP/1.0 transfer encoding is simply ignored so it's fine to have both

let mut buf = BytesMut::from(
"GET / HTTP/1.0\r\n\
Host: example.com\r\n\
Content-Length: 3\r\n\
Transfer-Encoding: chunked\r\n\
\r\n\
000",
);

expect_parse_err!(&mut buf);
parse_ready!(&mut buf);
}

#[test]
@@ -8,13 +8,15 @@ use std::{
task::{Context, Poll},
};

use actix_codec::{AsyncRead, AsyncWrite, Decoder as _, Encoder as _, Framed, FramedParts};
use actix_codec::{Framed, FramedParts};
use actix_rt::time::sleep_until;
use actix_service::Service;
use bitflags::bitflags;
use bytes::{Buf, BytesMut};
use futures_core::ready;
use pin_project_lite::pin_project;
use tokio::io::{AsyncRead, AsyncWrite};
use tokio_util::codec::{Decoder as _, Encoder as _};
use tracing::{error, trace};

use crate::{

@@ -976,9 +978,11 @@ where
//
// A Request head too large to parse is only checked on `httparse::Status::Partial`.

if this.payload.is_none() {
// When dispatcher has a payload the responsibility of wake up it would be shift
// to h1::payload::Payload.
match this.payload {
// When dispatcher has a payload the responsibility of wake ups is shifted to
// `h1::payload::Payload` unless the payload is needing a read, in which case it
// might not have access to the waker and could result in the dispatcher
// getting stuck until timeout.
//
// Reason:
// Self wake up when there is payload would waste poll and/or result in

@@ -989,7 +993,8 @@ where
// read anymore. At this case read_buf could always remain beyond
// MAX_BUFFER_SIZE and self wake up would be busy poll dispatcher and
// waste resources.
cx.waker().wake_by_ref();
Some(ref p) if p.need_read(cx) != PayloadStatus::Read => {}
_ => cx.waker().wake_by_ref(),
}

return Ok(false);

@@ -1001,7 +1006,7 @@ where
this.read_buf.reserve(HW_BUFFER_SIZE - remaining);
}

match actix_codec::poll_read_buf(io.as_mut(), cx, this.read_buf) {
match tokio_util::io::poll_read_buf(io.as_mut(), cx, this.read_buf) {
Poll::Ready(Ok(n)) => {
this.flags.remove(Flags::FINISHED);
@@ -64,7 +64,7 @@ fn drop_payload_service(
fn echo_payload_service() -> impl Service<Request, Response = Response<Bytes>, Error = Error> {
fn_service(|mut req: Request| {
Box::pin(async move {
use futures_util::stream::StreamExt as _;
use futures_util::StreamExt as _;

let mut pl = req.take_payload();
let mut body = BytesMut::new();

@@ -637,7 +637,7 @@ async fn expect_handling() {

if let DispatcherState::Normal { ref inner } = h1.inner {
let io = inner.io.as_ref().unwrap();
let mut res = (&io.write_buf()[..]).to_owned();
let mut res = io.write_buf()[..].to_owned();
stabilize_date_header(&mut res);

assert_eq!(

@@ -699,7 +699,7 @@ async fn expect_eager() {

if let DispatcherState::Normal { ref inner } = h1.inner {
let io = inner.io.as_ref().unwrap();
let mut res = (&io.write_buf()[..]).to_owned();
let mut res = io.write_buf()[..].to_owned();
stabilize_date_header(&mut res);

// Despite the content-length header and even though the request payload has not

@@ -932,7 +932,6 @@ fn http_msg(msg: impl AsRef<str>) -> BytesMut {
.as_ref()
.trim()
.split('\n')
.into_iter()
.map(|line| [line.trim_start(), "\r"].concat())
.collect::<Vec<_>>()
.join("\n");
@@ -450,7 +450,7 @@ impl TransferEncoding {

buf.extend_from_slice(&msg[..len as usize]);

*remaining -= len as u64;
*remaining -= len;
Ok(*remaining == 0)
} else {
Ok(true)

@@ -16,7 +16,7 @@ use crate::error::PayloadError;
/// max buffer size 32k
pub(crate) const MAX_BUFFER_SIZE: usize = 32_768;

#[derive(Debug, PartialEq)]
#[derive(Debug, PartialEq, Eq)]
pub enum PayloadStatus {
Read,
Pause,

@@ -252,19 +252,15 @@ impl Inner {

#[cfg(test)]
mod tests {
use std::panic::{RefUnwindSafe, UnwindSafe};

use actix_utils::future::poll_fn;
use static_assertions::{assert_impl_all, assert_not_impl_any};

use super::*;

assert_impl_all!(Payload: Unpin);
assert_not_impl_any!(Payload: Send, Sync, UnwindSafe, RefUnwindSafe);
assert_not_impl_any!(Payload: Send, Sync);

assert_impl_all!(Inner: Unpin, Send, Sync);
// assertion not stable wrt rustc versions yet
// assert_impl_all!(Inner: UnwindSafe, RefUnwindSafe);

#[actix_rt::test]
async fn test_unread_data() {
@@ -29,7 +29,7 @@ use crate::{
HeaderName, HeaderValue, CONNECTION, CONTENT_LENGTH, DATE, TRANSFER_ENCODING, UPGRADE,
},
service::HttpFlow,
Extensions, OnConnectData, Payload, Request, Response, ResponseHead,
Extensions, Method, OnConnectData, Payload, Request, Response, ResponseHead,
};

const CHUNK_SIZE: usize = 16_384;

@@ -67,7 +67,7 @@ where
timer
})
.unwrap_or_else(|| Box::pin(sleep(dur))),
on_flight: false,
in_flight: false,
ping_pong: conn.ping_pong().unwrap(),
});

@@ -84,9 +84,14 @@ where
}

struct H2PingPong {
timer: Pin<Box<Sleep>>,
on_flight: bool,
/// Handle to send ping frames from the peer.
ping_pong: PingPong,

/// True when a ping has been sent and is waiting for a reply.
in_flight: bool,

/// Timeout for pong response.
timer: Pin<Box<Sleep>>,
}

impl<T, S, B, X, U> Future for Dispatcher<T, S, B, X, U>

@@ -113,6 +118,7 @@ where
let payload = crate::h2::Payload::new(body);
let pl = Payload::H2 { payload };
let mut req = Request::with_payload(pl);
let head_req = parts.method == Method::HEAD;

let head = req.head_mut();
head.uri = parts.uri;

@@ -130,10 +136,10 @@ where
actix_rt::spawn(async move {
// resolve service call and send response.
let res = match fut.await {
Ok(res) => handle_response(res.into(), tx, config).await,
Ok(res) => handle_response(res.into(), tx, config, head_req).await,
Err(err) => {
let res: Response<BoxBody> = err.into();
handle_response(res, tx, config).await
handle_response(res, tx, config, head_req).await
}
};

@@ -152,26 +158,28 @@ where
});
}
Poll::Ready(None) => return Poll::Ready(Ok(())),

Poll::Pending => match this.ping_pong.as_mut() {
Some(ping_pong) => loop {
if ping_pong.on_flight {
// When have on flight ping pong. poll pong and and keep alive timer.
// on success pong received update keep alive timer to determine the next timing of
// ping pong.
if ping_pong.in_flight {
// When there is an in-flight ping-pong, poll pong and and keep-alive
// timer. On successful pong received, update keep-alive timer to
// determine the next timing of ping pong.
match ping_pong.ping_pong.poll_pong(cx)? {
Poll::Ready(_) => {
ping_pong.on_flight = false;
ping_pong.in_flight = false;

let dead_line = this.config.keep_alive_deadline().unwrap();
ping_pong.timer.as_mut().reset(dead_line.into());
}
Poll::Pending => {
return ping_pong.timer.as_mut().poll(cx).map(|_| Ok(()))
return ping_pong.timer.as_mut().poll(cx).map(|_| Ok(()));
}
}
} else {
// When there is no on flight ping pong. keep alive timer is used to wait for next
// timing of ping pong. Therefore at this point it serves as an interval instead.
// When there is no in-flight ping-pong, keep-alive timer is used to
// wait for next timing of ping-pong. Therefore, at this point it serves
// as an interval instead.
ready!(ping_pong.timer.as_mut().poll(cx));

ping_pong.ping_pong.send_ping(Ping::opaque())?;

@@ -179,7 +187,7 @@ where
let dead_line = this.config.keep_alive_deadline().unwrap();
ping_pong.timer.as_mut().reset(dead_line.into());

ping_pong.on_flight = true;
ping_pong.in_flight = true;
}
},
None => return Poll::Pending,

@@ -199,6 +207,7 @@ async fn handle_response<B>(
res: Response<B>,
mut tx: SendResponse<Bytes>,
config: ServiceConfig,
head_req: bool,
) -> Result<(), DispatchError>
where
B: MessageBody,

@@ -208,14 +217,14 @@ where
// prepare response.
let mut size = body.size();
let res = prepare_response(config, res.head(), &mut size);
let eof = size.is_eof();
let eof_or_head = size.is_eof() || head_req;

// send response head and return on eof.
let mut stream = tx
.send_response(res, eof)
.send_response(res, eof_or_head)
.map_err(DispatchError::SendResponse)?;

if eof {
if eof_or_head {
return Ok(());
}

@@ -287,13 +296,13 @@ fn prepare_response(
_ => {}
}

let _ = match size {
BodySize::None | BodySize::Stream => None,
match size {
BodySize::None | BodySize::Stream => {}

BodySize::Sized(0) => {
#[allow(clippy::declare_interior_mutable_const)]
const HV_ZERO: HeaderValue = HeaderValue::from_static("0");
res.headers_mut().insert(CONTENT_LENGTH, HV_ZERO)
res.headers_mut().insert(CONTENT_LENGTH, HV_ZERO);
}

BodySize::Sized(len) => {

@@ -302,7 +311,7 @@ fn prepare_response(
res.headers_mut().insert(
CONTENT_LENGTH,
HeaderValue::from_str(buf.format(*len)).unwrap(),
)
);
}
};
@@ -103,11 +103,9 @@ where

#[cfg(test)]
mod tests {
use std::panic::{RefUnwindSafe, UnwindSafe};

use static_assertions::assert_impl_all;

use super::*;

assert_impl_all!(Payload: Unpin, Send, Sync, UnwindSafe, RefUnwindSafe);
assert_impl_all!(Payload: Unpin, Send, Sync);
}
actix-http/src/header/common.rs (new file, 53 lines)
@@ -0,0 +1,53 @@
//! Common header names not defined in [`http`].
//!
//! Any headers added to this file will need to be re-exported from the list at `crate::headers`.

use http::header::HeaderName;

/// Response header field that indicates how caches have handled that response and its corresponding
/// request.
///
/// See [RFC 9211](https://www.rfc-editor.org/rfc/rfc9211) for full semantics.
// TODO(breaking): replace with http's version
pub const CACHE_STATUS: HeaderName = HeaderName::from_static("cache-status");

/// Response header field that allows origin servers to control the behavior of CDN caches
/// interposed between them and clients separately from other caches that might handle the response.
///
/// See [RFC 9213](https://www.rfc-editor.org/rfc/rfc9213) for full semantics.
// TODO(breaking): replace with http's version
pub const CDN_CACHE_CONTROL: HeaderName = HeaderName::from_static("cdn-cache-control");

/// Response header that prevents a document from loading any cross-origin resources that don't
/// explicitly grant the document permission (using [CORP] or [CORS]).
///
/// [CORP]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Cross-Origin_Resource_Policy_(CORP)
/// [CORS]: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
pub const CROSS_ORIGIN_EMBEDDER_POLICY: HeaderName =
HeaderName::from_static("cross-origin-embedder-policy");

/// Response header that allows you to ensure a top-level document does not share a browsing context
/// group with cross-origin documents.
pub const CROSS_ORIGIN_OPENER_POLICY: HeaderName =
HeaderName::from_static("cross-origin-opener-policy");

/// Response header that conveys a desire that the browser blocks no-cors cross-origin/cross-site
/// requests to the given resource.
pub const CROSS_ORIGIN_RESOURCE_POLICY: HeaderName =
HeaderName::from_static("cross-origin-resource-policy");

/// Response header that provides a mechanism to allow and deny the use of browser features in a
/// document or within any `<iframe>` elements in the document.
pub const PERMISSIONS_POLICY: HeaderName = HeaderName::from_static("permissions-policy");

/// Request header (de-facto standard) for identifying the originating IP address of a client
/// connecting to a web server through a proxy server.
pub const X_FORWARDED_FOR: HeaderName = HeaderName::from_static("x-forwarded-for");

/// Request header (de-facto standard) for identifying the original host requested by the client in
/// the `Host` HTTP request header.
pub const X_FORWARDED_HOST: HeaderName = HeaderName::from_static("x-forwarded-host");

/// Request header (de-facto standard) for identifying the protocol that a client used to connect to
/// your proxy or load balancer.
pub const X_FORWARDED_PROTO: HeaderName = HeaderName::from_static("x-forwarded-proto");
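Since these constants are plain `http::HeaderName` values re-exported from `actix_http::header`, they slot directly into a `HeaderMap`. A brief, hypothetical usage sketch (the header values shown are made up for illustration):

```rust
use actix_http::header::{HeaderMap, HeaderValue, X_FORWARDED_FOR, X_FORWARDED_PROTO};

fn main() {
    let mut headers = HeaderMap::new();

    // headers a reverse proxy might add before forwarding the request
    headers.insert(X_FORWARDED_FOR, HeaderValue::from_static("203.0.113.7"));
    headers.insert(X_FORWARDED_PROTO, HeaderValue::from_static("https"));

    assert_eq!(
        headers.get(&X_FORWARDED_PROTO).map(HeaderValue::as_bytes),
        Some(&b"https"[..])
    );
}
```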
@@ -150,9 +150,7 @@ impl HeaderMap {
/// assert_eq!(map.len(), 3);
/// ```
pub fn len(&self) -> usize {
self.inner
.iter()
.fold(0, |acc, (_, values)| acc + values.len())
self.inner.values().map(|vals| vals.len()).sum()
}

/// Returns the number of _keys_ stored in the map.

@@ -309,7 +307,7 @@ impl HeaderMap {
pub fn get_all(&self, key: impl AsHeaderName) -> std::slice::Iter<'_, HeaderValue> {
match self.get_value(key) {
Some(value) => value.iter(),
None => (&[]).iter(),
None => [].iter(),
}
}

@@ -552,6 +550,39 @@ impl HeaderMap {
Keys(self.inner.keys())
}

/// Retains only the headers specified by the predicate.
///
/// In other words, removes all headers `(name, val)` for which `retain_fn(&name, &mut val)`
/// returns false.
///
/// The order in which headers are visited should be considered arbitrary.
///
/// # Examples
/// ```
/// # use actix_http::header::{self, HeaderMap, HeaderValue};
/// let mut map = HeaderMap::new();
///
/// map.append(header::HOST, HeaderValue::from_static("duck.com"));
/// map.append(header::SET_COOKIE, HeaderValue::from_static("one=1"));
/// map.append(header::SET_COOKIE, HeaderValue::from_static("two=2"));
///
/// map.retain(|name, val| val.as_bytes().starts_with(b"one"));
///
/// assert_eq!(map.len(), 1);
/// assert!(map.contains_key(&header::SET_COOKIE));
/// ```
pub fn retain<F>(&mut self, mut retain_fn: F)
where
F: FnMut(&HeaderName, &mut HeaderValue) -> bool,
{
self.inner.retain(|name, vals| {
vals.inner.retain(|val| retain_fn(name, val));

// invariant: make sure newly empty value lists are removed
!vals.is_empty()
})
}

/// Clears the map, returning all name-value sets as an iterator.
///
/// Header names will only be yielded for the first value in each set. All items that are

@@ -943,6 +974,55 @@ mod tests {
assert!(map.is_empty());
}

#[test]
fn retain() {
let mut map = HeaderMap::new();

map.append(header::LOCATION, HeaderValue::from_static("/test"));
map.append(header::HOST, HeaderValue::from_static("duck.com"));
map.append(header::COOKIE, HeaderValue::from_static("one=1"));
map.append(header::COOKIE, HeaderValue::from_static("two=2"));

assert_eq!(map.len(), 4);

// by value
map.retain(|_, val| !val.as_bytes().contains(&b'/'));
assert_eq!(map.len(), 3);

// by name
map.retain(|name, _| name.as_str() != "cookie");
assert_eq!(map.len(), 1);

// keep but mutate value
map.retain(|_, val| {
*val = HeaderValue::from_static("replaced");
true
});
assert_eq!(map.len(), 1);
assert_eq!(map.get("host").unwrap(), "replaced");
}

#[test]
fn retain_removes_empty_value_lists() {
let mut map = HeaderMap::with_capacity(3);

map.append(header::HOST, HeaderValue::from_static("duck.com"));
map.append(header::HOST, HeaderValue::from_static("duck.com"));

assert_eq!(map.len(), 2);
assert_eq!(map.len_keys(), 1);
assert_eq!(map.inner.len(), 1);
assert_eq!(map.capacity(), 3);

// remove everything
map.retain(|_n, _v| false);

assert_eq!(map.len(), 0);
assert_eq!(map.len_keys(), 0);
assert_eq!(map.inner.len(), 0);
assert_eq!(map.capacity(), 3);
}

#[test]
fn entries_into_iter() {
let mut map = HeaderMap::new();
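Beyond the doc example in the hunk above, `retain` is convenient for dropping a whole class of headers in one pass, for example stripping connection-oriented headers before proxying. A hedged sketch (the header selection here is illustrative only, not a complete hop-by-hop list):

```rust
use actix_http::header::{self, HeaderMap, HeaderValue};

fn main() {
    let mut headers = HeaderMap::new();
    headers.insert(header::HOST, HeaderValue::from_static("example.com"));
    headers.insert(header::CONNECTION, HeaderValue::from_static("keep-alive"));
    headers.insert(header::TRANSFER_ENCODING, HeaderValue::from_static("chunked"));

    // drop connection-oriented headers, keep end-to-end ones
    headers.retain(|name, _value| {
        *name != header::CONNECTION && *name != header::TRANSFER_ENCODING
    });

    assert_eq!(headers.len(), 1);
    assert!(headers.contains_key(&header::HOST));
}
```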
@@ -1,14 +1,18 @@
//! Pre-defined `HeaderName`s, traits for parsing and conversion, and other header utility methods.

// declaring new header consts will yield this error
#![allow(clippy::declare_interior_mutable_const)]

use percent_encoding::{AsciiSet, CONTROLS};

// re-export from http except header map related items
pub use http::header::{
pub use ::http::header::{
HeaderName, HeaderValue, InvalidHeaderName, InvalidHeaderValue, ToStrError,
};

// re-export const header names
pub use http::header::{
// re-export const header names, list is explicit so that any updates to `common` module do not
// conflict with this set
pub use ::http::header::{
ACCEPT, ACCEPT_CHARSET, ACCEPT_ENCODING, ACCEPT_LANGUAGE, ACCEPT_RANGES,
ACCESS_CONTROL_ALLOW_CREDENTIALS, ACCESS_CONTROL_ALLOW_HEADERS,
ACCESS_CONTROL_ALLOW_METHODS, ACCESS_CONTROL_ALLOW_ORIGIN, ACCESS_CONTROL_EXPOSE_HEADERS,

@@ -30,22 +34,30 @@ pub use http::header::{
use crate::{error::ParseError, HttpMessage};

mod as_name;
mod common;
mod into_pair;
mod into_value;
pub mod map;
mod shared;
mod utils;

pub use self::as_name::AsHeaderName;
pub use self::into_pair::TryIntoHeaderPair;
pub use self::into_value::TryIntoHeaderValue;
pub use self::map::HeaderMap;
pub use self::shared::{
parse_extended_value, q, Charset, ContentEncoding, ExtendedValue, HttpDate, LanguageTag,
Quality, QualityItem,
pub use self::{
as_name::AsHeaderName,
into_pair::TryIntoHeaderPair,
into_value::TryIntoHeaderValue,
map::HeaderMap,
shared::{
parse_extended_value, q, Charset, ContentEncoding, ExtendedValue, HttpDate,
LanguageTag, Quality, QualityItem,
},
utils::{fmt_comma_delimited, from_comma_delimited, from_one_raw_str, http_percent_encode},
};
pub use self::utils::{
fmt_comma_delimited, from_comma_delimited, from_one_raw_str, http_percent_encode,

// re-export list is explicit so that any updates to `http` do not conflict with this set
pub use self::common::{
CACHE_STATUS, CDN_CACHE_CONTROL, CROSS_ORIGIN_EMBEDDER_POLICY, CROSS_ORIGIN_OPENER_POLICY,
CROSS_ORIGIN_RESOURCE_POLICY, PERMISSIONS_POLICY, X_FORWARDED_FOR, X_FORWARDED_HOST,
X_FORWARDED_PROTO,
};

/// An interface for types that already represent a valid header.
@@ -12,7 +12,7 @@ use crate::header::{Charset, HTTP_VALUE};
/// - A character sequence representing the actual value (`value`), separated by single quotes.
///
/// It is defined in [RFC 5987 §3.2](https://datatracker.ietf.org/doc/html/rfc5987#section-3.2).
#[derive(Clone, Debug, PartialEq)]
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct ExtendedValue {
/// The character set that is used to encode the `value` to a string.
pub charset: Charset,

@@ -147,7 +147,7 @@ mod tests {

// copy of encoding from actix-web headers
#[allow(clippy::enum_variant_names)] // allow Encoding prefix on EncodingExt
#[derive(Clone, PartialEq, Debug)]
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Encoding {
Chunked,
Brotli,

@@ -21,10 +21,12 @@
#![allow(
clippy::type_complexity,
clippy::too_many_arguments,
clippy::borrow_interior_mutable_const
clippy::borrow_interior_mutable_const,
clippy::uninlined_format_args
)]
#![doc(html_logo_url = "https://actix.rs/img/logo.png")]
#![doc(html_favicon_url = "https://actix.rs/favicon.ico")]
#![cfg_attr(docsrs, feature(doc_auto_cfg))]

pub use ::http::{uri, uri::Uri};
pub use ::http::{Method, StatusCode, Version};

@@ -69,6 +71,8 @@ pub use self::payload::{BoxedPayloadStream, Payload, PayloadStream};
pub use self::requests::{Request, RequestHead, RequestHeadType};
pub use self::responses::{Response, ResponseBuilder, ResponseHead};
pub use self::service::HttpService;
#[cfg(any(feature = "openssl", feature = "rustls"))]
pub use self::service::TlsAcceptorConfig;

/// A major HTTP protocol version.
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
@@ -3,7 +3,7 @@ use std::{cell::RefCell, ops, rc::Rc};
use bitflags::bitflags;

/// Represents various types of connection
#[derive(Copy, Clone, PartialEq, Debug)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ConnectionType {
/// Close connection after response.
Close,

@@ -97,12 +97,10 @@ where

#[cfg(test)]
mod tests {
use std::panic::{RefUnwindSafe, UnwindSafe};

use static_assertions::{assert_impl_all, assert_not_impl_any};

use super::*;

assert_impl_all!(Payload: Unpin);
assert_not_impl_any!(Payload: Send, Sync, UnwindSafe, RefUnwindSafe);
assert_not_impl_any!(Payload: Send, Sync);
}

@@ -113,14 +113,14 @@ impl<P> Request<P> {
#[inline]
/// Http message part of the request
pub fn head(&self) -> &RequestHead {
&*self.head
&self.head
}

#[inline]
#[doc(hidden)]
/// Mutable reference to a HTTP message part of the request
pub fn head_mut(&mut self) -> &mut RequestHead {
&mut *self.head
&mut self.head
}

/// Mutable reference to the message's headers.
@@ -237,7 +237,7 @@ mod tests {
.await;

let mut stream = net::TcpStream::connect(srv.addr()).unwrap();
let _ = stream
stream
.write_all(b"GET /camel HTTP/1.1\r\nConnection: Close\r\n\r\n")
.unwrap();
let mut data = vec![];

@@ -251,7 +251,7 @@ mod tests {
assert!(memmem::find(&data, b"content-length").is_none());

let mut stream = net::TcpStream::connect(srv.addr()).unwrap();
let _ = stream
stream
.write_all(b"GET /lower HTTP/1.1\r\nConnection: Close\r\n\r\n")
.unwrap();
let mut data = vec![];

@@ -83,13 +83,13 @@ impl<B> Response<B> {
/// Returns a reference to the head of this response.
#[inline]
pub fn head(&self) -> &ResponseHead {
&*self.head
&self.head
}

/// Returns a mutable reference to the head of this response.
#[inline]
pub fn head_mut(&mut self) -> &mut ResponseHead {
&mut *self.head
&mut self.head
}

/// Returns the status code of this response.
@@ -24,7 +24,39 @@ use crate::{
h1, ConnectCallback, OnConnectData, Protocol, Request, Response, ServiceConfig,
};

/// A `ServiceFactory` for HTTP/1.1 or HTTP/2 protocol.
/// A [`ServiceFactory`] for HTTP/1.1 and HTTP/2 connections.
///
/// Use [`build`](Self::build) to begin constructing service. Also see [`HttpServiceBuilder`].
///
/// # Automatic HTTP Version Selection
/// There are two ways to select the HTTP version of an incoming connection:
/// - One is to rely on the ALPN information that is provided when using a TLS (HTTPS); both
/// versions are supported automatically when using either of the `.rustls()` or `.openssl()`
/// finalizing methods.
/// - The other is to read the first few bytes of the TCP stream. This is the only viable approach
/// for supporting H2C, which allows the HTTP/2 protocol to work over plaintext connections. Use
/// the `.tcp_auto_h2c()` finalizing method to enable this behavior.
///
/// # Examples
/// ```
/// # use std::convert::Infallible;
/// use actix_http::{HttpService, Request, Response, StatusCode};
///
/// // this service would constructed in an actix_server::Server
///
/// # actix_rt::System::new().block_on(async {
/// HttpService::build()
/// // the builder finalizing method, other finalizers would not return an `HttpService`
/// .finish(|_req: Request| async move {
/// Ok::<_, Infallible>(
/// Response::build(StatusCode::OK).body("Hello!")
/// )
/// })
/// // the service finalizing method method
/// // you can use `.tcp_auto_h2c()`, `.rustls()`, or `.openssl()` instead of `.tcp()`
/// .tcp();
/// # })
/// ```
pub struct HttpService<T, S, B, X = h1::ExpectHandler, U = h1::UpgradeHandler> {
srv: S,
cfg: ServiceConfig,

@@ -163,7 +195,9 @@ where
U::Error: fmt::Display + Into<Response<BoxBody>>,
U::InitError: fmt::Debug,
{
/// Create simple tcp stream service
/// Creates TCP stream service from HTTP service.
///
/// The resulting service only supports HTTP/1.x.
pub fn tcp(
self,
) -> impl ServiceFactory<

@@ -179,6 +213,59 @@ where
})
.and_then(self)
}

/// Creates TCP stream service from HTTP service that automatically selects HTTP/1.x or HTTP/2
/// on plaintext connections.
#[cfg(feature = "http2")]
pub fn tcp_auto_h2c(
self,
) -> impl ServiceFactory<
TcpStream,
Config = (),
Response = (),
Error = DispatchError,
InitError = (),
> {
fn_service(move |io: TcpStream| async move {
// subset of HTTP/2 preface defined by RFC 9113 §3.4
// this subset was chosen to maximize likelihood that peeking only once will allow us to
// reliably determine version or else it should fallback to h1 and fail quickly if data
// on the wire is junk
const H2_PREFACE: &[u8] = b"PRI * HTTP/2";

let mut buf = [0; 12];

io.peek(&mut buf).await?;

let proto = if buf == H2_PREFACE {
Protocol::Http2
} else {
Protocol::Http1
};

let peer_addr = io.peer_addr().ok();
Ok((io, proto, peer_addr))
})
.and_then(self)
}
}

/// Configuration options used when accepting TLS connection.
#[cfg(any(feature = "openssl", feature = "rustls"))]
#[derive(Debug, Default)]
pub struct TlsAcceptorConfig {
pub(crate) handshake_timeout: Option<std::time::Duration>,
}

#[cfg(any(feature = "openssl", feature = "rustls"))]
impl TlsAcceptorConfig {
/// Set TLS handshake timeout duration.
pub fn handshake_timeout(self, dur: std::time::Duration) -> Self {
Self {
handshake_timeout: Some(dur),
// ..self
}
}
}

#[cfg(feature = "openssl")]

@@ -230,7 +317,28 @@ mod openssl {
Error = TlsError<SslError, DispatchError>,
InitError = (),
> {
Acceptor::new(acceptor)
self.openssl_with_config(acceptor, TlsAcceptorConfig::default())
}

/// Create OpenSSL based service with custom TLS acceptor configuration.
pub fn openssl_with_config(
self,
acceptor: SslAcceptor,
tls_acceptor_config: TlsAcceptorConfig,
) -> impl ServiceFactory<
TcpStream,
Config = (),
Response = (),
Error = TlsError<SslError, DispatchError>,
InitError = (),
> {
let mut acceptor = Acceptor::new(acceptor);

if let Some(handshake_timeout) = tls_acceptor_config.handshake_timeout {
acceptor.set_handshake_timeout(handshake_timeout);
}

acceptor
.map_init_err(|_| {
unreachable!("TLS acceptor service factory does not error on init")
})

@@ -293,8 +401,23 @@ mod rustls {
{
/// Create Rustls based service.
pub fn rustls(
self,
config: ServerConfig,
) -> impl ServiceFactory<
TcpStream,
Config = (),
Response = (),
Error = TlsError<io::Error, DispatchError>,
InitError = (),
> {
self.rustls_with_config(config, TlsAcceptorConfig::default())
}

/// Create Rustls based service with custom TLS acceptor configuration.
pub fn rustls_with_config(
self,
mut config: ServerConfig,
tls_acceptor_config: TlsAcceptorConfig,
) -> impl ServiceFactory<
TcpStream,
Config = (),

@@ -306,7 +429,13 @@ mod rustls {
protos.extend_from_slice(&config.alpn_protocols);
config.alpn_protocols = protos;

Acceptor::new(config)
let mut acceptor = Acceptor::new(config);

if let Some(handshake_timeout) = tls_acceptor_config.handshake_timeout {
acceptor.set_handshake_timeout(handshake_timeout);
}

acceptor
.map_init_err(|_| {
unreachable!("TLS acceptor service factory does not error on init")
})
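For the plaintext path, the new `tcp_auto_h2c()` finalizer can replace `.tcp()` in the doc example above. A minimal sketch, assuming the `http2` crate feature is enabled and `actix-rt` is available; the service body is illustrative only:

```rust
use std::convert::Infallible;

use actix_http::{HttpService, Request, Response, StatusCode};

fn main() {
    actix_rt::System::new().block_on(async {
        // builds a service factory that peeks for the HTTP/2 preface and then
        // dispatches each plaintext connection to HTTP/1.x or H2C accordingly
        let _service_factory = HttpService::build()
            .finish(|_req: Request| async move {
                Ok::<_, Infallible>(Response::build(StatusCode::OK).body("Hello!"))
            })
            .tcp_auto_h2c();
    });
}
```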
@@ -1,7 +1,7 @@
use actix_codec::{Decoder, Encoder};
use bitflags::bitflags;
use bytes::{Bytes, BytesMut};
use bytestring::ByteString;
use tokio_util::codec::{Decoder, Encoder};
use tracing::error;

use super::{

@@ -11,7 +11,7 @@ use super::{
};

/// A WebSocket message.
#[derive(Debug, PartialEq)]
#[derive(Debug, PartialEq, Eq)]
pub enum Message {
/// Text message.
Text(ByteString),

@@ -36,7 +36,7 @@ pub enum Message {
}

/// A WebSocket frame.
#[derive(Debug, PartialEq)]
#[derive(Debug, PartialEq, Eq)]
pub enum Frame {
/// Text frame. Note that the codec does not validate UTF-8 encoding.
Text(Bytes),

@@ -58,7 +58,7 @@ pub enum Frame {
}

/// A WebSocket continuation item.
#[derive(Debug, PartialEq)]
#[derive(Debug, PartialEq, Eq)]
pub enum Item {
FirstText(Bytes),
FirstBinary(Bytes),

@@ -76,7 +76,9 @@ mod inner {
use pin_project_lite::pin_project;
use tracing::debug;

use actix_codec::{AsyncRead, AsyncWrite, Decoder, Encoder, Framed};
use actix_codec::Framed;
use tokio::io::{AsyncRead, AsyncWrite};
use tokio_util::codec::{Decoder, Encoder};

use crate::{body::BoxBody, Response};
@ -17,7 +17,6 @@ impl Parser {
|
||||
fn parse_metadata(
|
||||
src: &[u8],
|
||||
server: bool,
|
||||
max_size: usize,
|
||||
) -> Result<Option<(usize, bool, OpCode, usize, Option<[u8; 4]>)>, ProtocolError> {
|
||||
let chunk_len = src.len();
|
||||
|
||||
@ -60,20 +59,12 @@ impl Parser {
|
||||
return Ok(None);
|
||||
}
|
||||
let len = u64::from_be_bytes(TryFrom::try_from(&src[idx..idx + 8]).unwrap());
|
||||
if len > max_size as u64 {
|
||||
return Err(ProtocolError::Overflow);
|
||||
}
|
||||
idx += 8;
|
||||
len as usize
|
||||
} else {
|
||||
len as usize
|
||||
};
|
||||
|
||||
// check for max allowed size
|
||||
if length > max_size {
|
||||
return Err(ProtocolError::Overflow);
|
||||
}
|
||||
|
||||
let mask = if server {
|
||||
if chunk_len < idx + 4 {
|
||||
return Ok(None);
|
||||
@ -98,11 +89,10 @@ impl Parser {
|
||||
max_size: usize,
|
||||
) -> Result<Option<(bool, OpCode, Option<BytesMut>)>, ProtocolError> {
|
||||
// try to parse ws frame metadata
|
||||
let (idx, finished, opcode, length, mask) =
|
||||
match Parser::parse_metadata(src, server, max_size)? {
|
||||
None => return Ok(None),
|
||||
Some(res) => res,
|
||||
};
|
||||
let (idx, finished, opcode, length, mask) = match Parser::parse_metadata(src, server)? {
|
||||
None => return Ok(None),
|
||||
Some(res) => res,
|
||||
};
|
||||
|
||||
// not enough data
|
||||
if src.len() < idx + length {
|
||||
@ -112,6 +102,13 @@ impl Parser {
|
||||
// remove prefix
|
||||
src.advance(idx);
|
||||
|
||||
// check for max allowed size
|
||||
if length > max_size {
|
||||
// drop the payload
|
||||
src.advance(length);
|
||||
return Err(ProtocolError::Overflow);
|
||||
}
|
||||
|
||||
// no need for body
|
||||
if length == 0 {
|
||||
return Ok(Some((finished, opcode, None)));
|
||||
@ -316,7 +313,7 @@ mod tests {
|
||||
#[test]
|
||||
fn test_parse_frame_no_mask() {
|
||||
let mut buf = BytesMut::from(&[0b0000_0001u8, 0b0000_0001u8][..]);
|
||||
buf.extend(&[1u8]);
|
||||
buf.extend([1u8]);
|
||||
|
||||
assert!(Parser::parse(&mut buf, true, 1024).is_err());
|
||||
|
||||
@ -329,7 +326,7 @@ mod tests {
|
||||
#[test]
|
||||
fn test_parse_frame_max_size() {
|
||||
let mut buf = BytesMut::from(&[0b0000_0001u8, 0b0000_0010u8][..]);
|
||||
buf.extend(&[1u8, 1u8]);
|
||||
buf.extend([1u8, 1u8]);
|
||||
|
||||
assert!(Parser::parse(&mut buf, true, 1).is_err());
|
||||
|
||||
@ -339,6 +336,30 @@ mod tests {
|
||||
}
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_parse_frame_max_size_recoverability() {
|
||||
let mut buf = BytesMut::new();
|
||||
// The first text frame with length == 2, payload doesn't matter.
|
||||
buf.extend([0b0000_0001u8, 0b0000_0010u8, 0b0000_0000u8, 0b0000_0000u8]);
|
||||
// Next binary frame with length == 2 and payload == `[0x1111_1111u8, 0x1111_1111u8]`.
|
||||
buf.extend([0b0000_0010u8, 0b0000_0010u8, 0b1111_1111u8, 0b1111_1111u8]);
|
||||
|
||||
assert_eq!(buf.len(), 8);
|
||||
assert!(matches!(
|
||||
Parser::parse(&mut buf, false, 1),
|
||||
Err(ProtocolError::Overflow)
|
||||
));
|
||||
assert_eq!(buf.len(), 4);
|
||||
let frame = extract(Parser::parse(&mut buf, false, 2));
|
||||
assert!(!frame.finished);
|
||||
assert_eq!(frame.opcode, OpCode::Binary);
|
||||
assert_eq!(
|
||||
frame.payload,
|
||||
Bytes::from(vec![0b1111_1111u8, 0b1111_1111u8])
|
||||
);
|
||||
assert_eq!(buf.len(), 0);
|
||||
}
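// A self-contained sketch (separate from the test module above) of what this recoverability
// enables for callers: since the size check now runs after the frame prefix is consumed, an
// oversized payload is dropped and the buffer is left positioned at the next frame, so a read
// loop can treat `ProtocolError::Overflow` as a per-frame error. This assumes `Parser` is
// re-exported from `actix_http::ws` as in current releases; buffer contents are illustrative.

use actix_http::ws::{Parser, ProtocolError};
use bytes::BytesMut;

/// Drains every complete frame currently buffered, skipping oversized ones.
fn drain_frames(buf: &mut BytesMut, max_size: usize) -> Result<usize, ProtocolError> {
    let mut frames = 0;

    loop {
        match Parser::parse(buf, false, max_size) {
            // a complete frame was decoded
            Ok(Some(_frame)) => frames += 1,
            // not enough data buffered for the next frame yet
            Ok(None) => return Ok(frames),
            // oversized payload was dropped; the buffer now starts at the next frame
            Err(ProtocolError::Overflow) => continue,
            Err(err) => return Err(err),
        }
    }
}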
|
||||
|
||||
#[test]
|
||||
fn test_ping_frame() {
|
||||
let mut buf = BytesMut::new();
|
||||
|
@ -67,7 +67,7 @@ pub enum ProtocolError {
|
||||
}
|
||||
|
||||
/// WebSocket handshake errors
|
||||
#[derive(Debug, Clone, Copy, PartialEq, Display, Error)]
|
||||
#[derive(Debug, Clone, Copy, PartialEq, Eq, Display, Error)]
|
||||
pub enum HandshakeError {
|
||||
/// Only get method is allowed.
|
||||
#[display(fmt = "Method not allowed.")]
|
||||
|
@ -3,6 +3,7 @@ use std::{
|
||||
fmt,
|
||||
};
|
||||
|
||||
use base64::prelude::*;
|
||||
use tracing::error;
|
||||
|
||||
/// Operation codes defined in [RFC 6455 §11.8].
|
||||
@ -244,7 +245,7 @@ pub fn hash_key(key: &[u8]) -> [u8; 28] {
|
||||
};
|
||||
|
||||
let mut hash_b64 = [0; 28];
|
||||
let n = base64::encode_config_slice(&hash, base64::STANDARD, &mut hash_b64);
|
||||
let n = BASE64_STANDARD.encode_slice(hash, &mut hash_b64).unwrap();
|
||||
assert_eq!(n, 28);
|
||||
|
||||
hash_b64
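// The hunk above swaps the older `base64::encode_config_slice` free function for the
// `Engine`-based API from `base64` 0.21, brought in via `base64::prelude::*`. A minimal,
// self-contained sketch of the new call shape; the zeroed 20-byte digest is a stand-in value.

use base64::prelude::*;

fn encode_demo() {
    let hash = [0u8; 20]; // stand-in for a SHA-1 digest (20 bytes)
    let mut out = [0u8; 28]; // base64 of 20 bytes is exactly 28 characters

    // `encode_slice` writes into the provided buffer and returns the number of bytes written,
    // erroring only if the buffer is too small.
    let n = BASE64_STANDARD.encode_slice(hash, &mut out).unwrap();
    assert_eq!(n, 28);
}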
|
||||
|
@ -1,14 +1,15 @@
|
||||
#![cfg(feature = "openssl")]
|
||||
#![allow(clippy::uninlined_format_args)]
|
||||
|
||||
extern crate tls_openssl as openssl;
|
||||
|
||||
use std::{convert::Infallible, io};
|
||||
use std::{convert::Infallible, io, time::Duration};
|
||||
|
||||
use actix_http::{
|
||||
body::{BodyStream, BoxBody, SizedStream},
|
||||
error::PayloadError,
|
||||
header::{self, HeaderValue},
|
||||
Error, HttpService, Method, Request, Response, StatusCode, Version,
|
||||
Error, HttpService, Method, Request, Response, StatusCode, TlsAcceptorConfig, Version,
|
||||
};
|
||||
use actix_http_test::test_server;
|
||||
use actix_service::{fn_service, ServiceFactoryExt};
|
||||
@ -16,7 +17,7 @@ use actix_utils::future::{err, ok, ready};
|
||||
use bytes::{Bytes, BytesMut};
|
||||
use derive_more::{Display, Error};
|
||||
use futures_core::Stream;
|
||||
use futures_util::stream::{once, StreamExt as _};
|
||||
use futures_util::{stream::once, StreamExt as _};
|
||||
use openssl::{
|
||||
pkey::PKey,
|
||||
ssl::{SslAcceptor, SslMethod},
|
||||
@ -89,7 +90,10 @@ async fn h2_1() -> io::Result<()> {
|
||||
assert_eq!(req.version(), Version::HTTP_2);
|
||||
ok::<_, Error>(Response::ok())
|
||||
})
|
||||
.openssl(tls_config())
|
||||
.openssl_with_config(
|
||||
tls_config(),
|
||||
TlsAcceptorConfig::default().handshake_timeout(Duration::from_secs(5)),
|
||||
)
|
||||
.map_err(|_| ())
|
||||
})
|
||||
.await;
|
||||
|
@ -1,4 +1,5 @@
|
||||
#![cfg(feature = "rustls")]
|
||||
#![allow(clippy::uninlined_format_args)]
|
||||
|
||||
extern crate tls_rustls as rustls;
|
||||
|
||||
@ -8,13 +9,14 @@ use std::{
|
||||
net::{SocketAddr, TcpStream as StdTcpStream},
|
||||
sync::Arc,
|
||||
task::Poll,
|
||||
time::Duration,
|
||||
};
|
||||
|
||||
use actix_http::{
|
||||
body::{BodyStream, BoxBody, SizedStream},
|
||||
error::PayloadError,
|
||||
header::{self, HeaderName, HeaderValue},
|
||||
Error, HttpService, Method, Request, Response, StatusCode, Version,
|
||||
Error, HttpService, Method, Request, Response, StatusCode, TlsAcceptorConfig, Version,
|
||||
};
|
||||
use actix_http_test::test_server;
|
||||
use actix_rt::pin;
|
||||
@ -40,7 +42,7 @@ where
|
||||
let body = stream.as_mut();
|
||||
|
||||
match ready!(body.poll_next(cx)) {
|
||||
Some(Ok(bytes)) => buf.extend_from_slice(&*bytes),
|
||||
Some(Ok(bytes)) => buf.extend_from_slice(&bytes),
|
||||
None => return Poll::Ready(Ok(())),
|
||||
Some(Err(err)) => return Poll::Ready(Err(err)),
|
||||
}
|
||||
@ -160,7 +162,10 @@ async fn h2_1() -> io::Result<()> {
|
||||
assert_eq!(req.version(), Version::HTTP_2);
|
||||
ok::<_, Error>(Response::ok())
|
||||
})
|
||||
.rustls(tls_config())
|
||||
.rustls_with_config(
|
||||
tls_config(),
|
||||
TlsAcceptorConfig::default().handshake_timeout(Duration::from_secs(5)),
|
||||
)
|
||||
})
|
||||
.await;
|
||||
|
||||
|
@ -1,3 +1,5 @@
|
||||
#![allow(clippy::uninlined_format_args)]
|
||||
|
||||
use std::{
|
||||
convert::Infallible,
|
||||
io::{Read, Write},
|
||||
@ -7,18 +9,15 @@ use std::{
|
||||
|
||||
use actix_http::{
|
||||
body::{self, BodyStream, BoxBody, SizedStream},
|
||||
header, Error, HttpService, KeepAlive, Request, Response, StatusCode,
|
||||
header, Error, HttpService, KeepAlive, Request, Response, StatusCode, Version,
|
||||
};
|
||||
use actix_http_test::test_server;
|
||||
use actix_rt::time::sleep;
|
||||
use actix_rt::{net::TcpStream, time::sleep};
|
||||
use actix_service::fn_service;
|
||||
use actix_utils::future::{err, ok, ready};
|
||||
use bytes::Bytes;
|
||||
use derive_more::{Display, Error};
|
||||
use futures_util::{
|
||||
stream::{once, StreamExt as _},
|
||||
FutureExt as _,
|
||||
};
|
||||
use futures_util::{stream::once, FutureExt as _, StreamExt as _};
|
||||
use regex::Regex;
|
||||
|
||||
#[actix_rt::test]
|
||||
@ -858,3 +857,44 @@ async fn not_modified_spec_h1() {
|
||||
|
||||
srv.stop().await;
|
||||
}
|
||||
|
||||
#[actix_rt::test]
|
||||
async fn h2c_auto() {
|
||||
let mut srv = test_server(|| {
|
||||
HttpService::build()
|
||||
.keep_alive(KeepAlive::Disabled)
|
||||
.finish(|req: Request| {
|
||||
let body = match req.version() {
|
||||
Version::HTTP_11 => "h1",
|
||||
Version::HTTP_2 => "h2",
|
||||
_ => unreachable!(),
|
||||
};
|
||||
ok::<_, Infallible>(Response::ok().set_body(body))
|
||||
})
|
||||
.tcp_auto_h2c()
|
||||
})
|
||||
.await;
|
||||
|
||||
let req = srv.get("/");
|
||||
assert_eq!(req.get_version(), &Version::HTTP_11);
|
||||
let mut res = req.send().await.unwrap();
|
||||
assert!(res.status().is_success());
|
||||
assert_eq!(res.body().await.unwrap(), &b"h1"[..]);
|
||||
|
||||
// awc doesn't support forcing the version to http/2 so use h2 manually
|
||||
|
||||
let tcp = TcpStream::connect(srv.addr()).await.unwrap();
|
||||
let (h2, connection) = h2::client::handshake(tcp).await.unwrap();
|
||||
tokio::spawn(async move { connection.await.unwrap() });
|
||||
let mut h2 = h2.ready().await.unwrap();
|
||||
|
||||
let request = ::http::Request::new(());
|
||||
let (response, _) = h2.send_request(request, true).unwrap();
|
||||
let (head, mut body) = response.await.unwrap().into_parts();
|
||||
let body = body.data().await.unwrap().unwrap();
|
||||
|
||||
assert!(head.status.is_success());
|
||||
assert_eq!(body, &b"h2"[..]);
|
||||
|
||||
srv.stop().await;
|
||||
}
|
||||
|
@ -1,3 +1,5 @@
|
||||
#![allow(clippy::uninlined_format_args)]
|
||||
|
||||
use std::{
|
||||
cell::Cell,
|
||||
convert::Infallible,
|
||||
|
5
actix-multipart-derive/CHANGES.md
Normal file
@ -0,0 +1,5 @@
|
||||
# Changes
|
||||
|
||||
## 0.6.0 - 2023-02-26
|
||||
|
||||
- Add `MultipartForm` derive macro.
|
30
actix-multipart-derive/Cargo.toml
Normal file
@ -0,0 +1,30 @@
|
||||
[package]
|
||||
name = "actix-multipart-derive"
|
||||
version = "0.6.0"
|
||||
authors = ["Jacob Halsey <jacob@jhalsey.com>"]
|
||||
description = "Multipart form derive macro for Actix Web"
|
||||
keywords = ["http", "web", "framework", "async", "futures"]
|
||||
homepage = "https://actix.rs"
|
||||
repository = "https://github.com/actix/actix-web.git"
|
||||
license = "MIT OR Apache-2.0"
|
||||
edition = "2018"
|
||||
|
||||
[package.metadata.docs.rs]
|
||||
rustdoc-args = ["--cfg", "docsrs"]
|
||||
all-features = true
|
||||
|
||||
[lib]
|
||||
proc-macro = true
|
||||
|
||||
[dependencies]
|
||||
darling = "0.14"
|
||||
parse-size = "1"
|
||||
proc-macro2 = "1"
|
||||
quote = "1"
|
||||
syn = "1"
|
||||
|
||||
[dev-dependencies]
|
||||
actix-multipart = "0.6"
|
||||
actix-web = "4"
|
||||
rustversion = "1"
|
||||
trybuild = "1"
|
1
actix-multipart-derive/LICENSE-APACHE
Symbolic link
@ -0,0 +1 @@
|
||||
../LICENSE-APACHE
|
1
actix-multipart-derive/LICENSE-MIT
Symbolic link
@ -0,0 +1 @@
|
||||
../LICENSE-MIT
|
17
actix-multipart-derive/README.md
Normal file
@ -0,0 +1,17 @@
|
||||
# actix-multipart-derive
|
||||
|
||||
> The derive macro implementation for actix-multipart.
|
||||
|
||||
[](https://crates.io/crates/actix-multipart-derive)
|
||||
[](https://docs.rs/actix-multipart-derive/0.5.0)
|
||||

|
||||

|
||||
<br />
|
||||
[](https://deps.rs/crate/actix-multipart-derive/0.5.0)
|
||||
[](https://crates.io/crates/actix-multipart-derive)
|
||||
[](https://discord.gg/NWpN5mmg3x)
|
||||
|
||||
## Documentation & Resources
|
||||
|
||||
- [API Documentation](https://docs.rs/actix-multipart-derive)
|
||||
- Minimum Supported Rust Version (MSRV): 1.59
|
315
actix-multipart-derive/src/lib.rs
Normal file
@ -0,0 +1,315 @@
|
||||
//! Multipart form derive macro for Actix Web.
|
||||
//!
|
||||
//! See [`macro@MultipartForm`] for usage examples.
|
||||
|
||||
#![deny(rust_2018_idioms, nonstandard_style)]
|
||||
#![warn(future_incompatible)]
|
||||
#![doc(html_logo_url = "https://actix.rs/img/logo.png")]
|
||||
#![doc(html_favicon_url = "https://actix.rs/favicon.ico")]
|
||||
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
|
||||
|
||||
use std::{collections::HashSet, convert::TryFrom as _};
|
||||
|
||||
use darling::{FromDeriveInput, FromField, FromMeta};
|
||||
use parse_size::parse_size;
|
||||
use proc_macro::TokenStream;
|
||||
use proc_macro2::Ident;
|
||||
use quote::quote;
|
||||
use syn::{parse_macro_input, Type};
|
||||
|
||||
#[derive(FromMeta)]
|
||||
enum DuplicateField {
|
||||
Ignore,
|
||||
Deny,
|
||||
Replace,
|
||||
}
|
||||
|
||||
impl Default for DuplicateField {
|
||||
fn default() -> Self {
|
||||
Self::Ignore
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(FromDeriveInput, Default)]
|
||||
#[darling(attributes(multipart), default)]
|
||||
struct MultipartFormAttrs {
|
||||
deny_unknown_fields: bool,
|
||||
duplicate_field: DuplicateField,
|
||||
}
|
||||
|
||||
#[derive(FromField, Default)]
|
||||
#[darling(attributes(multipart), default)]
|
||||
struct FieldAttrs {
|
||||
rename: Option<String>,
|
||||
limit: Option<String>,
|
||||
}
|
||||
|
||||
struct ParsedField<'t> {
|
||||
serialization_name: String,
|
||||
rust_name: &'t Ident,
|
||||
limit: Option<usize>,
|
||||
ty: &'t Type,
|
||||
}
|
||||
|
||||
/// Implements `MultipartCollect` for a struct so that it can be used with the `MultipartForm`
|
||||
/// extractor.
|
||||
///
|
||||
/// # Basic Use
|
||||
///
|
||||
/// Each field type should implement the `FieldReader` trait:
|
||||
///
|
||||
/// ```
|
||||
/// use actix_multipart::form::{tempfile::TempFile, text::Text, MultipartForm};
|
||||
///
|
||||
/// #[derive(MultipartForm)]
|
||||
/// struct ImageUpload {
|
||||
/// description: Text<String>,
|
||||
/// timestamp: Text<i64>,
|
||||
/// image: TempFile,
|
||||
/// }
|
||||
/// ```
|
||||
///
|
||||
/// # Optional and List Fields
|
||||
///
|
||||
/// You can also use `Vec<T>` and `Option<T>` provided that `T: FieldReader`.
|
||||
///
|
||||
/// A [`Vec`] field corresponds to an upload with multiple parts under the [same field
|
||||
/// name](https://www.rfc-editor.org/rfc/rfc7578#section-4.3).
|
||||
///
|
||||
/// ```
|
||||
/// use actix_multipart::form::{tempfile::TempFile, text::Text, MultipartForm};
|
||||
///
|
||||
/// #[derive(MultipartForm)]
|
||||
/// struct Form {
|
||||
/// category: Option<Text<String>>,
|
||||
/// files: Vec<TempFile>,
|
||||
/// }
|
||||
/// ```
|
||||
///
|
||||
/// # Field Renaming
|
||||
///
|
||||
/// You can use the `#[multipart(rename = "foo")]` attribute to receive a field by a different name.
|
||||
///
|
||||
/// ```
|
||||
/// use actix_multipart::form::{tempfile::TempFile, MultipartForm};
|
||||
///
|
||||
/// #[derive(MultipartForm)]
|
||||
/// struct Form {
|
||||
/// #[multipart(rename = "files[]")]
|
||||
/// files: Vec<TempFile>,
|
||||
/// }
|
||||
/// ```
|
||||
///
|
||||
/// # Field Limits
|
||||
///
|
||||
/// You can use the `#[multipart(limit = "<size>")]` attribute to set field level limits. The limit
|
||||
/// string is parsed using [parse_size].
|
||||
///
|
||||
/// Note: the form is also subject to the global limits configured using `MultipartFormConfig`.
|
||||
///
|
||||
/// ```
|
||||
/// use actix_multipart::form::{tempfile::TempFile, text::Text, MultipartForm};
|
||||
///
|
||||
/// #[derive(MultipartForm)]
|
||||
/// struct Form {
|
||||
/// #[multipart(limit = "2 KiB")]
|
||||
/// description: Text<String>,
|
||||
///
|
||||
/// #[multipart(limit = "512 MiB")]
|
||||
/// files: Vec<TempFile>,
|
||||
/// }
|
||||
/// ```
|
||||
///
|
||||
/// # Unknown Fields
|
||||
///
|
||||
/// By default fields with an unknown name are ignored. They can be rejected using the
|
||||
/// `#[multipart(deny_unknown_fields)]` attribute:
|
||||
///
|
||||
/// ```
|
||||
/// # use actix_multipart::form::MultipartForm;
|
||||
/// #[derive(MultipartForm)]
|
||||
/// #[multipart(deny_unknown_fields)]
|
||||
/// struct Form { }
|
||||
/// ```
|
||||
///
|
||||
/// # Duplicate Fields
|
||||
///
|
||||
/// The behaviour when multiple fields with the same name are received can be changed using the
|
||||
/// `#[multipart(duplicate_field = "<behavior>")]` attribute:
|
||||
///
|
||||
/// - "ignore": (default) Extra fields are ignored. I.e., the first one is persisted.
|
||||
/// - "deny": A `MultipartError::UnsupportedField` error response is returned.
|
||||
/// - "replace": Each field is processed, but only the last one is persisted.
|
||||
///
|
||||
/// Note that `Vec` fields will ignore this option.
|
||||
///
|
||||
/// ```
|
||||
/// # use actix_multipart::form::MultipartForm;
|
||||
/// #[derive(MultipartForm)]
|
||||
/// #[multipart(duplicate_field = "deny")]
|
||||
/// struct Form { }
|
||||
/// ```
|
||||
///
|
||||
/// [parse_size]: https://docs.rs/parse-size/1/parse_size
|
||||
#[proc_macro_derive(MultipartForm, attributes(multipart))]
|
||||
pub fn impl_multipart_form(input: proc_macro::TokenStream) -> proc_macro::TokenStream {
|
||||
let input: syn::DeriveInput = parse_macro_input!(input);
|
||||
|
||||
let name = &input.ident;
|
||||
|
||||
let data_struct = match &input.data {
|
||||
syn::Data::Struct(data_struct) => data_struct,
|
||||
_ => {
|
||||
return compile_err(syn::Error::new(
|
||||
input.ident.span(),
|
||||
"`MultipartForm` can only be derived for structs",
|
||||
))
|
||||
}
|
||||
};
|
||||
|
||||
let fields = match &data_struct.fields {
|
||||
syn::Fields::Named(fields_named) => fields_named,
|
||||
_ => {
|
||||
return compile_err(syn::Error::new(
|
||||
input.ident.span(),
|
||||
"`MultipartForm` can only be derived for a struct with named fields",
|
||||
))
|
||||
}
|
||||
};
|
||||
|
||||
let attrs = match MultipartFormAttrs::from_derive_input(&input) {
|
||||
Ok(attrs) => attrs,
|
||||
Err(err) => return err.write_errors().into(),
|
||||
};
|
||||
|
||||
// Parse the field attributes
|
||||
let parsed = match fields
|
||||
.named
|
||||
.iter()
|
||||
.map(|field| {
|
||||
let rust_name = field.ident.as_ref().unwrap();
|
||||
let attrs = FieldAttrs::from_field(field).map_err(|err| err.write_errors())?;
|
||||
let serialization_name = attrs.rename.unwrap_or_else(|| rust_name.to_string());
|
||||
|
||||
let limit = match attrs.limit.map(|limit| match parse_size(&limit) {
|
||||
Ok(size) => Ok(usize::try_from(size).unwrap()),
|
||||
Err(err) => Err(syn::Error::new(
|
||||
field.ident.as_ref().unwrap().span(),
|
||||
format!("Could not parse size limit `{}`: {}", limit, err),
|
||||
)),
|
||||
}) {
|
||||
Some(Err(err)) => return Err(compile_err(err)),
|
||||
limit => limit.map(Result::unwrap),
|
||||
};
|
||||
|
||||
Ok(ParsedField {
|
||||
serialization_name,
|
||||
rust_name,
|
||||
limit,
|
||||
ty: &field.ty,
|
||||
})
|
||||
})
|
||||
.collect::<Result<Vec<_>, TokenStream>>()
|
||||
{
|
||||
Ok(attrs) => attrs,
|
||||
Err(err) => return err,
|
||||
};
|
||||
|
||||
// Check that field names are unique
|
||||
let mut set = HashSet::new();
|
||||
for field in &parsed {
|
||||
if !set.insert(field.serialization_name.clone()) {
|
||||
return compile_err(syn::Error::new(
|
||||
field.rust_name.span(),
|
||||
format!("Multiple fields named: `{}`", field.serialization_name),
|
||||
));
|
||||
}
|
||||
}
|
||||
|
||||
// Return value when a field name is not supported by the form
|
||||
let unknown_field_result = if attrs.deny_unknown_fields {
|
||||
quote!(::std::result::Result::Err(
|
||||
::actix_multipart::MultipartError::UnsupportedField(field.name().to_string())
|
||||
))
|
||||
} else {
|
||||
quote!(::std::result::Result::Ok(()))
|
||||
};
|
||||
|
||||
// Value for duplicate action
|
||||
let duplicate_field = match attrs.duplicate_field {
|
||||
DuplicateField::Ignore => quote!(::actix_multipart::form::DuplicateField::Ignore),
|
||||
DuplicateField::Deny => quote!(::actix_multipart::form::DuplicateField::Deny),
|
||||
DuplicateField::Replace => quote!(::actix_multipart::form::DuplicateField::Replace),
|
||||
};
|
||||
|
||||
// limit() implementation
|
||||
let mut limit_impl = quote!();
|
||||
for field in &parsed {
|
||||
let name = &field.serialization_name;
|
||||
if let Some(value) = field.limit {
|
||||
limit_impl.extend(quote!(
|
||||
#name => ::std::option::Option::Some(#value),
|
||||
));
|
||||
}
|
||||
}
|
||||
|
||||
// handle_field() implementation
|
||||
let mut handle_field_impl = quote!();
|
||||
for field in &parsed {
|
||||
let name = &field.serialization_name;
|
||||
let ty = &field.ty;
|
||||
|
||||
handle_field_impl.extend(quote!(
|
||||
#name => ::std::boxed::Box::pin(
|
||||
<#ty as ::actix_multipart::form::FieldGroupReader>::handle_field(req, field, limits, state, #duplicate_field)
|
||||
),
|
||||
));
|
||||
}
|
||||
|
||||
// from_state() implementation
|
||||
let mut from_state_impl = quote!();
|
||||
for field in &parsed {
|
||||
let name = &field.serialization_name;
|
||||
let rust_name = &field.rust_name;
|
||||
let ty = &field.ty;
|
||||
from_state_impl.extend(quote!(
|
||||
#rust_name: <#ty as ::actix_multipart::form::FieldGroupReader>::from_state(#name, &mut state)?,
|
||||
));
|
||||
}
|
||||
|
||||
let gen = quote! {
|
||||
impl ::actix_multipart::form::MultipartCollect for #name {
|
||||
fn limit(field_name: &str) -> ::std::option::Option<usize> {
|
||||
match field_name {
|
||||
#limit_impl
|
||||
_ => None,
|
||||
}
|
||||
}
|
||||
|
||||
fn handle_field<'t>(
|
||||
req: &'t ::actix_web::HttpRequest,
|
||||
field: ::actix_multipart::Field,
|
||||
limits: &'t mut ::actix_multipart::form::Limits,
|
||||
state: &'t mut ::actix_multipart::form::State,
|
||||
) -> ::std::pin::Pin<::std::boxed::Box<dyn ::std::future::Future<Output = ::std::result::Result<(), ::actix_multipart::MultipartError>> + 't>> {
|
||||
match field.name() {
|
||||
#handle_field_impl
|
||||
_ => return ::std::boxed::Box::pin(::std::future::ready(#unknown_field_result)),
|
||||
}
|
||||
}
|
||||
|
||||
fn from_state(mut state: ::actix_multipart::form::State) -> ::std::result::Result<Self, ::actix_multipart::MultipartError> {
|
||||
Ok(Self {
|
||||
#from_state_impl
|
||||
})
|
||||
}
|
||||
|
||||
}
|
||||
};
|
||||
gen.into()
|
||||
}
|
||||
|
||||
/// Transform a syn error into a token stream for returning.
|
||||
fn compile_err(err: syn::Error) -> TokenStream {
|
||||
TokenStream::from(err.to_compile_error())
|
||||
}
|
16
actix-multipart-derive/tests/trybuild.rs
Normal file
@ -0,0 +1,16 @@
|
||||
#[rustversion::stable(1.59)] // MSRV
|
||||
#[test]
|
||||
fn compile_macros() {
|
||||
let t = trybuild::TestCases::new();
|
||||
|
||||
t.pass("tests/trybuild/all-required.rs");
|
||||
t.pass("tests/trybuild/optional-and-list.rs");
|
||||
t.pass("tests/trybuild/rename.rs");
|
||||
t.pass("tests/trybuild/deny-unknown.rs");
|
||||
|
||||
t.pass("tests/trybuild/deny-duplicates.rs");
|
||||
t.compile_fail("tests/trybuild/deny-parse-fail.rs");
|
||||
|
||||
t.pass("tests/trybuild/size-limits.rs");
|
||||
t.compile_fail("tests/trybuild/size-limit-parse-fail.rs");
|
||||
}
|
19
actix-multipart-derive/tests/trybuild/all-required.rs
Normal file
@ -0,0 +1,19 @@
|
||||
use actix_web::{web, App, Responder};
|
||||
|
||||
use actix_multipart::form::{tempfile::TempFile, text::Text, MultipartForm};
|
||||
|
||||
#[derive(Debug, MultipartForm)]
|
||||
struct ImageUpload {
|
||||
description: Text<String>,
|
||||
timestamp: Text<i64>,
|
||||
image: TempFile,
|
||||
}
|
||||
|
||||
async fn handler(_form: MultipartForm<ImageUpload>) -> impl Responder {
|
||||
"Hello World!"
|
||||
}
|
||||
|
||||
#[actix_web::main]
|
||||
async fn main() {
|
||||
App::new().default_service(web::to(handler));
|
||||
}
|
16
actix-multipart-derive/tests/trybuild/deny-duplicates.rs
Normal file
@ -0,0 +1,16 @@
|
||||
use actix_web::{web, App, Responder};
|
||||
|
||||
use actix_multipart::form::MultipartForm;
|
||||
|
||||
#[derive(MultipartForm)]
|
||||
#[multipart(duplicate_field = "deny")]
|
||||
struct Form {}
|
||||
|
||||
async fn handler(_form: MultipartForm<Form>) -> impl Responder {
|
||||
"Hello World!"
|
||||
}
|
||||
|
||||
#[actix_web::main]
|
||||
async fn main() {
|
||||
App::new().default_service(web::to(handler));
|
||||
}
|
7
actix-multipart-derive/tests/trybuild/deny-parse-fail.rs
Normal file
@ -0,0 +1,7 @@
|
||||
use actix_multipart::form::MultipartForm;
|
||||
|
||||
#[derive(MultipartForm)]
|
||||
#[multipart(duplicate_field = "no")]
|
||||
struct Form {}
|
||||
|
||||
fn main() {}
|
@ -0,0 +1,5 @@
|
||||
error: Unknown literal value `no`
|
||||
--> tests/trybuild/deny-parse-fail.rs:4:31
|
||||
|
|
||||
4 | #[multipart(duplicate_field = "no")]
|
||||
| ^^^^
|
16
actix-multipart-derive/tests/trybuild/deny-unknown.rs
Normal file
@ -0,0 +1,16 @@
|
||||
use actix_web::{web, App, Responder};
|
||||
|
||||
use actix_multipart::form::MultipartForm;
|
||||
|
||||
#[derive(MultipartForm)]
|
||||
#[multipart(deny_unknown_fields)]
|
||||
struct Form {}
|
||||
|
||||
async fn handler(_form: MultipartForm<Form>) -> impl Responder {
|
||||
"Hello World!"
|
||||
}
|
||||
|
||||
#[actix_web::main]
|
||||
async fn main() {
|
||||
App::new().default_service(web::to(handler));
|
||||
}
|
18
actix-multipart-derive/tests/trybuild/optional-and-list.rs
Normal file
@ -0,0 +1,18 @@
|
||||
use actix_web::{web, App, Responder};
|
||||
|
||||
use actix_multipart::form::{tempfile::TempFile, text::Text, MultipartForm};
|
||||
|
||||
#[derive(MultipartForm)]
|
||||
struct Form {
|
||||
category: Option<Text<String>>,
|
||||
files: Vec<TempFile>,
|
||||
}
|
||||
|
||||
async fn handler(_form: MultipartForm<Form>) -> impl Responder {
|
||||
"Hello World!"
|
||||
}
|
||||
|
||||
#[actix_web::main]
|
||||
async fn main() {
|
||||
App::new().default_service(web::to(handler));
|
||||
}
|
18
actix-multipart-derive/tests/trybuild/rename.rs
Normal file
@ -0,0 +1,18 @@
|
||||
use actix_web::{web, App, Responder};
|
||||
|
||||
use actix_multipart::form::{tempfile::TempFile, MultipartForm};
|
||||
|
||||
#[derive(MultipartForm)]
|
||||
struct Form {
|
||||
#[multipart(rename = "files[]")]
|
||||
files: Vec<TempFile>,
|
||||
}
|
||||
|
||||
async fn handler(_form: MultipartForm<Form>) -> impl Responder {
|
||||
"Hello World!"
|
||||
}
|
||||
|
||||
#[actix_web::main]
|
||||
async fn main() {
|
||||
App::new().default_service(web::to(handler));
|
||||
}
|
@ -0,0 +1,21 @@
|
||||
use actix_multipart::form::{text::Text, MultipartForm};
|
||||
|
||||
#[derive(MultipartForm)]
|
||||
struct Form {
|
||||
#[multipart(limit = "2 bytes")]
|
||||
description: Text<String>,
|
||||
}
|
||||
|
||||
#[derive(MultipartForm)]
|
||||
struct Form2 {
|
||||
#[multipart(limit = "2 megabytes")]
|
||||
description: Text<String>,
|
||||
}
|
||||
|
||||
#[derive(MultipartForm)]
|
||||
struct Form3 {
|
||||
#[multipart(limit = "four meters")]
|
||||
description: Text<String>,
|
||||
}
|
||||
|
||||
fn main() {}
|
@ -0,0 +1,17 @@
|
||||
error: Could not parse size limit `2 bytes`: invalid digit found in string
|
||||
--> tests/trybuild/size-limit-parse-fail.rs:6:5
|
||||
|
|
||||
6 | description: Text<String>,
|
||||
| ^^^^^^^^^^^
|
||||
|
||||
error: Could not parse size limit `2 megabytes`: invalid digit found in string
|
||||
--> tests/trybuild/size-limit-parse-fail.rs:12:5
|
||||
|
|
||||
12 | description: Text<String>,
|
||||
| ^^^^^^^^^^^
|
||||
|
||||
error: Could not parse size limit `four meters`: invalid digit found in string
|
||||
--> tests/trybuild/size-limit-parse-fail.rs:18:5
|
||||
|
|
||||
18 | description: Text<String>,
|
||||
| ^^^^^^^^^^^
|
21
actix-multipart-derive/tests/trybuild/size-limits.rs
Normal file
@ -0,0 +1,21 @@
|
||||
use actix_web::{web, App, Responder};
|
||||
|
||||
use actix_multipart::form::{tempfile::TempFile, text::Text, MultipartForm};
|
||||
|
||||
#[derive(MultipartForm)]
|
||||
struct Form {
|
||||
#[multipart(limit = "2 KiB")]
|
||||
description: Text<String>,
|
||||
|
||||
#[multipart(limit = "512 MiB")]
|
||||
files: Vec<TempFile>,
|
||||
}
|
||||
|
||||
async fn handler(_form: MultipartForm<Form>) -> impl Responder {
|
||||
"Hello World!"
|
||||
}
|
||||
|
||||
#[actix_web::main]
|
||||
async fn main() {
|
||||
App::new().default_service(web::to(handler));
|
||||
}
|
@ -1,36 +1,48 @@
|
||||
# Changes
|
||||
|
||||
## Unreleased - 2021-xx-xx
|
||||
- Minimum supported Rust version (MSRV) is now 1.56 due to transitive `hashbrown` dependency.
|
||||
## Unreleased - 2023-xx-xx
|
||||
|
||||
## 0.6.0 - 2023-02-26
|
||||
|
||||
- Added `MultipartForm` typed data extractor. [#2883]
|
||||
|
||||
[#2883]: https://github.com/actix/actix-web/pull/2883
|
||||
|
||||
## 0.5.0 - 2023-01-21
|
||||
|
||||
- `Field::content_type()` now returns `Option<&mime::Mime>`. [#2885]
|
||||
- Minimum supported Rust version (MSRV) is now 1.59 due to transitive `time` dependency.
|
||||
|
||||
[#2885]: https://github.com/actix/actix-web/pull/2885
|
||||
|
||||
## 0.4.0 - 2022-02-25
|
||||
|
||||
- No significant changes since `0.4.0-beta.13`.
|
||||
|
||||
|
||||
## 0.4.0-beta.13 - 2022-01-31
|
||||
|
||||
- No significant changes since `0.4.0-beta.12`.
|
||||
|
||||
|
||||
## 0.4.0-beta.12 - 2022-01-04
|
||||
|
||||
- Minimum supported Rust version (MSRV) is now 1.54.
|
||||
|
||||
|
||||
## 0.4.0-beta.11 - 2021-12-27
|
||||
|
||||
- No significant changes since `0.4.0-beta.10`.
|
||||
|
||||
|
||||
## 0.4.0-beta.10 - 2021-12-11
|
||||
|
||||
- No significant changes since `0.4.0-beta.9`.
|
||||
|
||||
|
||||
## 0.4.0-beta.9 - 2021-12-01
|
||||
|
||||
- Polling `Field` after dropping `Multipart` now fails immediately instead of hanging forever. [#2463]
|
||||
|
||||
[#2463]: https://github.com/actix/actix-web/pull/2463
|
||||
|
||||
|
||||
## 0.4.0-beta.8 - 2021-11-22
|
||||
|
||||
- Ensure a correct Content-Disposition header is included in every part of a multipart message. [#2451]
|
||||
- Added `MultipartError::NoContentDisposition` variant. [#2451]
|
||||
- Since Content-Disposition is now ensured, `Field::content_disposition` is now infallible. [#2451]
|
||||
@ -40,52 +52,52 @@
|
||||
|
||||
[#2451]: https://github.com/actix/actix-web/pull/2451
|
||||
|
||||
|
||||
## 0.4.0-beta.7 - 2021-10-20
|
||||
|
||||
- Minimum supported Rust version (MSRV) is now 1.52.
|
||||
|
||||
|
||||
## 0.4.0-beta.6 - 2021-09-09
|
||||
|
||||
- Minimum supported Rust version (MSRV) is now 1.51.
|
||||
|
||||
|
||||
## 0.4.0-beta.5 - 2021-06-17
|
||||
- No notable changes.
|
||||
|
||||
- No notable changes.
|
||||
|
||||
## 0.4.0-beta.4 - 2021-04-02
|
||||
- No notable changes.
|
||||
|
||||
- No notable changes.
|
||||
|
||||
## 0.4.0-beta.3 - 2021-03-09
|
||||
- No notable changes.
|
||||
|
||||
- No notable changes.
|
||||
|
||||
## 0.4.0-beta.2 - 2021-02-10
|
||||
|
||||
- No notable changes.
|
||||
|
||||
|
||||
## 0.4.0-beta.1 - 2021-01-07
|
||||
|
||||
- Fix multipart consuming payload before header checks. [#1513]
|
||||
- Update `bytes` to `1.0`. [#1813]
|
||||
|
||||
[#1813]: https://github.com/actix/actix-web/pull/1813
|
||||
[#1513]: https://github.com/actix/actix-web/pull/1513
|
||||
|
||||
|
||||
## 0.3.0 - 2020-09-11
|
||||
|
||||
- No significant changes from `0.3.0-beta.2`.
|
||||
|
||||
|
||||
## 0.3.0-beta.2 - 2020-09-10
|
||||
|
||||
- Update `actix-*` dependencies to latest versions.
|
||||
|
||||
|
||||
## 0.3.0-beta.1 - 2020-07-15
|
||||
|
||||
- Update `actix-web` to 3.0.0-beta.1
|
||||
|
||||
|
||||
## 0.3.0-alpha.1 - 2020-05-25
|
||||
|
||||
- Update `actix-web` to 3.0.0-alpha.3
|
||||
- Bump minimum supported Rust version to 1.40
|
||||
- Minimize `futures` dependencies
|
||||
|
@ -1,7 +1,10 @@
|
||||
[package]
|
||||
name = "actix-multipart"
|
||||
version = "0.4.0"
|
||||
authors = ["Nikolay Kim <fafhrd91@gmail.com>"]
|
||||
version = "0.6.0"
|
||||
authors = [
|
||||
"Nikolay Kim <fafhrd91@gmail.com>",
|
||||
"Jacob Halsey <jacob@jhalsey.com>",
|
||||
]
|
||||
description = "Multipart form support for Actix Web"
|
||||
keywords = ["http", "web", "framework", "async", "futures"]
|
||||
homepage = "https://actix.rs"
|
||||
@ -9,26 +12,42 @@ repository = "https://github.com/actix/actix-web.git"
|
||||
license = "MIT OR Apache-2.0"
|
||||
edition = "2018"
|
||||
|
||||
[lib]
|
||||
name = "actix_multipart"
|
||||
path = "src/lib.rs"
|
||||
[package.metadata.docs.rs]
|
||||
rustdoc-args = ["--cfg", "docsrs"]
|
||||
all-features = true
|
||||
|
||||
[features]
|
||||
default = ["tempfile", "derive"]
|
||||
derive = ["actix-multipart-derive"]
|
||||
tempfile = ["tempfile-dep", "tokio/fs"]
|
||||
|
||||
[dependencies]
|
||||
actix-multipart-derive = { version = "=0.6.0", optional = true }
|
||||
actix-utils = "3"
|
||||
actix-web = { version = "4", default-features = false }
|
||||
|
||||
bytes = "1"
|
||||
derive_more = "0.99.5"
|
||||
futures-core = { version = "0.3.7", default-features = false, features = ["alloc"] }
|
||||
futures-core = { version = "0.3.17", default-features = false, features = ["alloc"] }
|
||||
futures-util = { version = "0.3.17", default-features = false, features = ["alloc"] }
|
||||
httparse = "1.3"
|
||||
local-waker = "0.1"
|
||||
log = "0.4"
|
||||
memchr = "2.5"
|
||||
mime = "0.3"
|
||||
twoway = "0.2"
|
||||
serde = "1"
|
||||
serde_json = "1"
|
||||
serde_plain = "1"
|
||||
# TODO(MSRV 1.60): replace with dep: prefix
|
||||
tempfile-dep = { package = "tempfile", version = "3.4", optional = true }
|
||||
tokio = { version = "1.24.2", features = ["sync"] }
|
||||
|
||||
[dev-dependencies]
|
||||
actix-http = "3"
|
||||
actix-multipart-rfc7578 = "0.10"
|
||||
actix-rt = "2.2"
|
||||
actix-http = "3.0.0"
|
||||
futures-util = { version = "0.3.7", default-features = false, features = ["alloc"] }
|
||||
tokio = { version = "1.8.4", features = ["sync"] }
|
||||
actix-test = "0.1"
|
||||
awc = "3"
|
||||
futures-util = { version = "0.3.17", default-features = false, features = ["alloc"] }
|
||||
tokio = { version = "1.24.2", features = ["sync"] }
|
||||
tokio-stream = "0.1"
|
||||
|
@ -3,15 +3,15 @@
|
||||
> Multipart form support for Actix Web.
|
||||
|
||||
[](https://crates.io/crates/actix-multipart)
|
||||
[](https://docs.rs/actix-multipart/0.4.0)
|
||||

|
||||
[](https://docs.rs/actix-multipart/0.6.0)
|
||||

|
||||

|
||||
<br />
|
||||
[](https://deps.rs/crate/actix-multipart/0.4.0)
|
||||
[](https://deps.rs/crate/actix-multipart/0.6.0)
|
||||
[](https://crates.io/crates/actix-multipart)
|
||||
[](https://discord.gg/NWpN5mmg3x)
|
||||
|
||||
## Documentation & Resources
|
||||
|
||||
- [API Documentation](https://docs.rs/actix-multipart)
|
||||
- Minimum Supported Rust Version (MSRV): 1.54
|
||||
- Minimum Supported Rust Version (MSRV): 1.59
|
||||
|
@ -1,12 +1,15 @@
|
||||
//! Error and Result module
|
||||
use actix_web::error::{ParseError, PayloadError};
|
||||
use actix_web::http::StatusCode;
|
||||
use actix_web::ResponseError;
|
||||
|
||||
use actix_web::{
|
||||
error::{ParseError, PayloadError},
|
||||
http::StatusCode,
|
||||
ResponseError,
|
||||
};
|
||||
use derive_more::{Display, Error, From};
|
||||
|
||||
/// A set of errors that can occur during parsing multipart streams
|
||||
#[non_exhaustive]
|
||||
/// A set of errors that can occur during parsing multipart streams.
|
||||
#[derive(Debug, Display, From, Error)]
|
||||
#[non_exhaustive]
|
||||
pub enum MultipartError {
|
||||
/// Content-Disposition header is not found or is not equal to "form-data".
|
||||
///
|
||||
@ -46,12 +49,41 @@ pub enum MultipartError {
|
||||
/// Not consumed
|
||||
#[display(fmt = "Multipart stream is not consumed")]
|
||||
NotConsumed,
|
||||
|
||||
/// An error from a field handler in a form
|
||||
#[display(
|
||||
fmt = "An error occurred processing field `{}`: {}",
|
||||
field_name,
|
||||
source
|
||||
)]
|
||||
Field {
|
||||
field_name: String,
|
||||
source: actix_web::Error,
|
||||
},
|
||||
|
||||
/// Duplicate field
|
||||
#[display(fmt = "Duplicate field found for: `{}`", _0)]
|
||||
#[from(ignore)]
|
||||
DuplicateField(#[error(not(source))] String),
|
||||
|
||||
/// Missing field
|
||||
#[display(fmt = "Field with name `{}` is required", _0)]
|
||||
#[from(ignore)]
|
||||
MissingField(#[error(not(source))] String),
|
||||
|
||||
/// Unknown field
|
||||
#[display(fmt = "Unsupported field `{}`", _0)]
|
||||
#[from(ignore)]
|
||||
UnsupportedField(#[error(not(source))] String),
|
||||
}
|
||||
|
||||
/// Return `BadRequest` for `MultipartError`, or the status code of the source error for `MultipartError::Field`.
|
||||
impl ResponseError for MultipartError {
|
||||
fn status_code(&self) -> StatusCode {
|
||||
StatusCode::BAD_REQUEST
|
||||
match &self {
|
||||
MultipartError::Field { source, .. } => source.as_response_error().status_code(),
|
||||
_ => StatusCode::BAD_REQUEST,
|
||||
}
|
||||
}
|
||||
}
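// A self-contained sketch of what the delegation above buys: the status code of a field
// handler's error is now surfaced instead of a blanket 400. The `InternalError` wrapper and
// the field name are illustrative values chosen for this example, not anything defined here.

use actix_multipart::MultipartError;
use actix_web::{error::InternalError, http::StatusCode, ResponseError as _};

fn field_error_status_demo() {
    // hypothetical field handler failure that maps to 413 Payload Too Large
    let source: actix_web::Error =
        InternalError::new("file too large", StatusCode::PAYLOAD_TOO_LARGE).into();

    let err = MultipartError::Field {
        field_name: "avatar".to_owned(),
        source,
    };

    // the multipart error reports the underlying field error's status code
    assert_eq!(err.status_code(), StatusCode::PAYLOAD_TOO_LARGE);
}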
|
||||
|
||||
|
@ -9,12 +9,11 @@ use crate::server::Multipart;
|
||||
///
|
||||
/// Content-type: multipart/form-data;
|
||||
///
|
||||
/// ## Server example
|
||||
///
|
||||
/// # Examples
|
||||
/// ```
|
||||
/// use actix_web::{web, HttpResponse, Error};
|
||||
/// use actix_multipart::Multipart;
|
||||
/// use futures_util::stream::StreamExt as _;
|
||||
/// use futures_util::StreamExt as _;
|
||||
///
|
||||
/// async fn index(mut payload: Multipart) -> Result<HttpResponse, Error> {
|
||||
/// // iterate over multipart stream
|
||||
|
53
actix-multipart/src/form/bytes.rs
Normal file
@ -0,0 +1,53 @@
|
||||
//! Reads a field into memory.
|
||||
|
||||
use actix_web::HttpRequest;
|
||||
use bytes::BytesMut;
|
||||
use futures_core::future::LocalBoxFuture;
|
||||
use futures_util::TryStreamExt as _;
|
||||
use mime::Mime;
|
||||
|
||||
use crate::{
|
||||
form::{FieldReader, Limits},
|
||||
Field, MultipartError,
|
||||
};
|
||||
|
||||
/// Read the field into memory.
|
||||
#[derive(Debug)]
|
||||
pub struct Bytes {
|
||||
/// The data.
|
||||
pub data: bytes::Bytes,
|
||||
|
||||
/// The value of the `Content-Type` header.
|
||||
pub content_type: Option<Mime>,
|
||||
|
||||
/// The `filename` value in the `Content-Disposition` header.
|
||||
pub file_name: Option<String>,
|
||||
}
|
||||
|
||||
impl<'t> FieldReader<'t> for Bytes {
|
||||
type Future = LocalBoxFuture<'t, Result<Self, MultipartError>>;
|
||||
|
||||
fn read_field(
|
||||
_: &'t HttpRequest,
|
||||
mut field: Field,
|
||||
limits: &'t mut Limits,
|
||||
) -> Self::Future {
|
||||
Box::pin(async move {
|
||||
let mut buf = BytesMut::with_capacity(131_072);
|
||||
|
||||
while let Some(chunk) = field.try_next().await? {
|
||||
limits.try_consume_limits(chunk.len(), true)?;
|
||||
buf.extend(chunk);
|
||||
}
|
||||
|
||||
Ok(Bytes {
|
||||
data: buf.freeze(),
|
||||
content_type: field.content_type().map(ToOwned::to_owned),
|
||||
file_name: field
|
||||
.content_disposition()
|
||||
.get_filename()
|
||||
.map(str::to_owned),
|
||||
})
|
||||
})
|
||||
}
|
||||
}
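// A short, self-contained usage sketch for the field type above (struct and handler names are
// illustrative): deriving `MultipartForm` with a `Bytes` field keeps the whole part in memory
// and exposes its data, content type, and file name.

use actix_multipart::form::{bytes::Bytes, MultipartForm};
use actix_web::{web, App, Responder};

#[derive(MultipartForm)]
struct Upload {
    file: Bytes,
}

async fn upload_handler(form: MultipartForm<Upload>) -> impl Responder {
    format!(
        "received {} bytes (content type: {:?}, file name: {:?})",
        form.file.data.len(),
        form.file.content_type,
        form.file.file_name,
    )
}

#[actix_web::main]
async fn main() {
    App::new().default_service(web::to(upload_handler));
}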
|
195
actix-multipart/src/form/json.rs
Normal file
@ -0,0 +1,195 @@
|
||||
//! Deserializes a field as JSON.
|
||||
|
||||
use std::sync::Arc;
|
||||
|
||||
use actix_web::{http::StatusCode, web, Error, HttpRequest, ResponseError};
|
||||
use derive_more::{Deref, DerefMut, Display, Error};
|
||||
use futures_core::future::LocalBoxFuture;
|
||||
use serde::de::DeserializeOwned;
|
||||
|
||||
use crate::{
|
||||
form::{bytes::Bytes, FieldReader, Limits},
|
||||
Field, MultipartError,
|
||||
};
|
||||
|
||||
use super::FieldErrorHandler;
|
||||
|
||||
/// Deserialize from JSON.
|
||||
#[derive(Debug, Deref, DerefMut)]
|
||||
pub struct Json<T: DeserializeOwned>(pub T);
|
||||
|
||||
impl<T: DeserializeOwned> Json<T> {
|
||||
pub fn into_inner(self) -> T {
|
||||
self.0
|
||||
}
|
||||
}
|
||||
|
||||
impl<'t, T> FieldReader<'t> for Json<T>
|
||||
where
|
||||
T: DeserializeOwned + 'static,
|
||||
{
|
||||
type Future = LocalBoxFuture<'t, Result<Self, MultipartError>>;
|
||||
|
||||
fn read_field(req: &'t HttpRequest, field: Field, limits: &'t mut Limits) -> Self::Future {
|
||||
Box::pin(async move {
|
||||
let config = JsonConfig::from_req(req);
|
||||
let field_name = field.name().to_owned();
|
||||
|
||||
if config.validate_content_type {
|
||||
let valid = if let Some(mime) = field.content_type() {
|
||||
mime.subtype() == mime::JSON || mime.suffix() == Some(mime::JSON)
|
||||
} else {
|
||||
false
|
||||
};
|
||||
|
||||
if !valid {
|
||||
return Err(MultipartError::Field {
|
||||
field_name,
|
||||
source: config.map_error(req, JsonFieldError::ContentType),
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
let bytes = Bytes::read_field(req, field, limits).await?;
|
||||
|
||||
Ok(Json(serde_json::from_slice(bytes.data.as_ref()).map_err(
|
||||
|err| MultipartError::Field {
|
||||
field_name,
|
||||
source: config.map_error(req, JsonFieldError::Deserialize(err)),
|
||||
},
|
||||
)?))
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Display, Error)]
|
||||
#[non_exhaustive]
|
||||
pub enum JsonFieldError {
|
||||
/// Deserialize error.
|
||||
#[display(fmt = "Json deserialize error: {}", _0)]
|
||||
Deserialize(serde_json::Error),
|
||||
|
||||
/// Content type error.
|
||||
#[display(fmt = "Content type error")]
|
||||
ContentType,
|
||||
}
|
||||
|
||||
impl ResponseError for JsonFieldError {
|
||||
fn status_code(&self) -> StatusCode {
|
||||
StatusCode::BAD_REQUEST
|
||||
}
|
||||
}
|
||||
|
||||
/// Configuration for the [`Json`] field reader.
|
||||
#[derive(Clone)]
|
||||
pub struct JsonConfig {
|
||||
err_handler: FieldErrorHandler<JsonFieldError>,
|
||||
validate_content_type: bool,
|
||||
}
|
||||
|
||||
const DEFAULT_CONFIG: JsonConfig = JsonConfig {
|
||||
err_handler: None,
|
||||
validate_content_type: true,
|
||||
};
|
||||
|
||||
impl JsonConfig {
|
||||
pub fn error_handler<F>(mut self, f: F) -> Self
|
||||
where
|
||||
F: Fn(JsonFieldError, &HttpRequest) -> Error + Send + Sync + 'static,
|
||||
{
|
||||
self.err_handler = Some(Arc::new(f));
|
||||
self
|
||||
}
|
||||
|
||||
/// Extract payload config from app data. Check both `T` and `Data<T>`, in that order, and fall
|
||||
/// back to the default payload config.
|
||||
fn from_req(req: &HttpRequest) -> &Self {
|
||||
req.app_data::<Self>()
|
||||
.or_else(|| req.app_data::<web::Data<Self>>().map(|d| d.as_ref()))
|
||||
.unwrap_or(&DEFAULT_CONFIG)
|
||||
}
|
||||
|
||||
fn map_error(&self, req: &HttpRequest, err: JsonFieldError) -> Error {
|
||||
if let Some(err_handler) = self.err_handler.as_ref() {
|
||||
(*err_handler)(err, req)
|
||||
} else {
|
||||
err.into()
|
||||
}
|
||||
}
|
||||
|
||||
/// Sets whether or not the field must have a valid `Content-Type` header to be parsed.
|
||||
pub fn validate_content_type(mut self, validate_content_type: bool) -> Self {
|
||||
self.validate_content_type = validate_content_type;
|
||||
self
|
||||
}
|
||||
}
|
||||
|
||||
impl Default for JsonConfig {
|
||||
fn default() -> Self {
|
||||
DEFAULT_CONFIG
|
||||
}
|
||||
}
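// A self-contained sketch of wiring the config above into an app with a custom error handler.
// The 422 response and the handler body are illustrative choices, not defaults of this crate.

use actix_multipart::form::json::{JsonConfig, JsonFieldError};
use actix_web::{error::InternalError, App, HttpResponse};

fn json_field_config() -> JsonConfig {
    JsonConfig::default()
        // keep the default Content-Type validation explicit
        .validate_content_type(true)
        // turn any JSON field error into a 422 with the error text as the body
        .error_handler(|err: JsonFieldError, _req| {
            let response = HttpResponse::UnprocessableEntity().body(err.to_string());
            InternalError::from_response(err, response).into()
        })
}

fn build_app_sketch() {
    // the `Json` field reader picks this config up from app data
    let _app = App::new().app_data(json_field_config());
}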
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use std::{collections::HashMap, io::Cursor};
|
||||
|
||||
use actix_multipart_rfc7578::client::multipart;
|
||||
use actix_web::{http::StatusCode, web, App, HttpResponse, Responder};
|
||||
|
||||
use crate::form::{
|
||||
json::{Json, JsonConfig},
|
||||
tests::send_form,
|
||||
MultipartForm,
|
||||
};
|
||||
|
||||
#[derive(MultipartForm)]
|
||||
struct JsonForm {
|
||||
json: Json<HashMap<String, String>>,
|
||||
}
|
||||
|
||||
async fn test_json_route(form: MultipartForm<JsonForm>) -> impl Responder {
|
||||
let mut expected = HashMap::new();
|
||||
expected.insert("key1".to_owned(), "value1".to_owned());
|
||||
expected.insert("key2".to_owned(), "value2".to_owned());
|
||||
assert_eq!(&*form.json, &expected);
|
||||
HttpResponse::Ok().finish()
|
||||
}
|
||||
|
||||
#[actix_rt::test]
|
||||
async fn test_json_without_content_type() {
|
||||
let srv = actix_test::start(|| {
|
||||
App::new()
|
||||
.route("/", web::post().to(test_json_route))
|
||||
.app_data(JsonConfig::default().validate_content_type(false))
|
||||
});
|
||||
|
||||
let mut form = multipart::Form::default();
|
||||
form.add_text("json", "{\"key1\": \"value1\", \"key2\": \"value2\"}");
|
||||
let response = send_form(&srv, form, "/").await;
|
||||
assert_eq!(response.status(), StatusCode::OK);
|
||||
}
|
||||
|
||||
#[actix_rt::test]
|
||||
async fn test_content_type_validation() {
|
||||
let srv = actix_test::start(|| {
|
||||
App::new()
|
||||
.route("/", web::post().to(test_json_route))
|
||||
.app_data(JsonConfig::default().validate_content_type(true))
|
||||
});
|
||||
|
||||
// Deny because wrong content type
|
||||
let bytes = Cursor::new("{\"key1\": \"value1\", \"key2\": \"value2\"}");
|
||||
let mut form = multipart::Form::default();
|
||||
form.add_reader_file_with_mime("json", bytes, "", mime::APPLICATION_OCTET_STREAM);
|
||||
let response = send_form(&srv, form, "/").await;
|
||||
assert_eq!(response.status(), StatusCode::BAD_REQUEST);
|
||||
|
||||
// Allow because correct content type
|
||||
let bytes = Cursor::new("{\"key1\": \"value1\", \"key2\": \"value2\"}");
|
||||
let mut form = multipart::Form::default();
|
||||
form.add_reader_file_with_mime("json", bytes, "", mime::APPLICATION_JSON);
|
||||
let response = send_form(&srv, form, "/").await;
|
||||
assert_eq!(response.status(), StatusCode::OK);
|
||||
}
|
||||
}
|
742
actix-multipart/src/form/mod.rs
Normal file
@ -0,0 +1,742 @@
|
||||
//! Process and extract typed data from a multipart stream.
|
||||
|
||||
use std::{
|
||||
any::Any,
|
||||
collections::HashMap,
|
||||
future::{ready, Future},
|
||||
sync::Arc,
|
||||
};
|
||||
|
||||
use actix_web::{dev, error::PayloadError, web, Error, FromRequest, HttpRequest};
|
||||
use derive_more::{Deref, DerefMut};
|
||||
use futures_core::future::LocalBoxFuture;
|
||||
use futures_util::{TryFutureExt as _, TryStreamExt as _};
|
||||
|
||||
use crate::{Field, Multipart, MultipartError};
|
||||
|
||||
pub mod bytes;
|
||||
pub mod json;
|
||||
#[cfg(feature = "tempfile")]
|
||||
pub mod tempfile;
|
||||
pub mod text;
|
||||
|
||||
#[cfg(feature = "derive")]
|
||||
pub use actix_multipart_derive::MultipartForm;
|
||||
|
||||
type FieldErrorHandler<T> = Option<Arc<dyn Fn(T, &HttpRequest) -> Error + Send + Sync>>;
|
||||
|
||||
/// Trait that data types to be used in a multipart form struct should implement.
|
||||
///
|
||||
/// It represents an asynchronous handler that processes a multipart field to produce `Self`.
|
||||
pub trait FieldReader<'t>: Sized + Any {
|
||||
/// Future that resolves to a `Self`.
|
||||
type Future: Future<Output = Result<Self, MultipartError>>;
|
||||
|
||||
/// The form will call this function to handle the field.
|
||||
fn read_field(req: &'t HttpRequest, field: Field, limits: &'t mut Limits) -> Self::Future;
|
||||
}
|
||||
|
||||
/// Used to accumulate the state of the loaded fields.
|
||||
#[doc(hidden)]
|
||||
#[derive(Default, Deref, DerefMut)]
|
||||
pub struct State(pub HashMap<String, Box<dyn Any>>);
|
||||
|
||||
/// Trait that the field collection types implement, i.e. `Vec<T>`, `Option<T>`, or `T` itself.
|
||||
#[doc(hidden)]
|
||||
pub trait FieldGroupReader<'t>: Sized + Any {
|
||||
type Future: Future<Output = Result<(), MultipartError>>;
|
||||
|
||||
/// The form will call this function for each matching field.
|
||||
fn handle_field(
|
||||
req: &'t HttpRequest,
|
||||
field: Field,
|
||||
limits: &'t mut Limits,
|
||||
state: &'t mut State,
|
||||
duplicate_field: DuplicateField,
|
||||
) -> Self::Future;
|
||||
|
||||
/// Construct `Self` from the group of processed fields.
|
||||
fn from_state(name: &str, state: &'t mut State) -> Result<Self, MultipartError>;
|
||||
}
|
||||
|
||||
impl<'t, T> FieldGroupReader<'t> for Option<T>
|
||||
where
|
||||
T: FieldReader<'t>,
|
||||
{
|
||||
type Future = LocalBoxFuture<'t, Result<(), MultipartError>>;
|
||||
|
||||
fn handle_field(
|
||||
req: &'t HttpRequest,
|
||||
field: Field,
|
||||
limits: &'t mut Limits,
|
||||
state: &'t mut State,
|
||||
duplicate_field: DuplicateField,
|
||||
) -> Self::Future {
|
||||
if state.contains_key(field.name()) {
|
||||
match duplicate_field {
|
||||
DuplicateField::Ignore => return Box::pin(ready(Ok(()))),
|
||||
|
||||
DuplicateField::Deny => {
|
||||
return Box::pin(ready(Err(MultipartError::DuplicateField(
|
||||
field.name().to_owned(),
|
||||
))))
|
||||
}
|
||||
|
||||
DuplicateField::Replace => {}
|
||||
}
|
||||
}
|
||||
|
||||
Box::pin(async move {
|
||||
let field_name = field.name().to_owned();
|
||||
let t = T::read_field(req, field, limits).await?;
|
||||
state.insert(field_name, Box::new(t));
|
||||
Ok(())
|
||||
})
|
||||
}
|
||||
|
||||
fn from_state(name: &str, state: &'t mut State) -> Result<Self, MultipartError> {
|
||||
Ok(state.remove(name).map(|m| *m.downcast::<T>().unwrap()))
|
||||
}
|
||||
}
|
||||
|
||||
impl<'t, T> FieldGroupReader<'t> for Vec<T>
|
||||
where
|
||||
T: FieldReader<'t>,
|
||||
{
|
||||
type Future = LocalBoxFuture<'t, Result<(), MultipartError>>;
|
||||
|
||||
fn handle_field(
|
||||
req: &'t HttpRequest,
|
||||
field: Field,
|
||||
limits: &'t mut Limits,
|
||||
state: &'t mut State,
|
||||
_duplicate_field: DuplicateField,
|
||||
) -> Self::Future {
|
||||
Box::pin(async move {
|
||||
// Note: Vec GroupReader always allows duplicates
|
||||
|
||||
let field_name = field.name().to_owned();
|
||||
|
||||
let vec = state
|
||||
.entry(field_name)
|
||||
.or_insert_with(|| Box::<Vec<T>>::default())
|
||||
.downcast_mut::<Vec<T>>()
|
||||
.unwrap();
|
||||
|
||||
let item = T::read_field(req, field, limits).await?;
|
||||
vec.push(item);
|
||||
|
||||
Ok(())
|
||||
})
|
||||
}
|
||||
|
||||
fn from_state(name: &str, state: &'t mut State) -> Result<Self, MultipartError> {
|
||||
Ok(state
|
||||
.remove(name)
|
||||
.map(|m| *m.downcast::<Vec<T>>().unwrap())
|
||||
.unwrap_or_default())
|
||||
}
|
||||
}
|
||||
|
||||
impl<'t, T> FieldGroupReader<'t> for T
|
||||
where
|
||||
T: FieldReader<'t>,
|
||||
{
|
||||
type Future = LocalBoxFuture<'t, Result<(), MultipartError>>;
|
||||
|
||||
fn handle_field(
|
||||
req: &'t HttpRequest,
|
||||
field: Field,
|
||||
limits: &'t mut Limits,
|
||||
state: &'t mut State,
|
||||
duplicate_field: DuplicateField,
|
||||
) -> Self::Future {
|
||||
if state.contains_key(field.name()) {
|
||||
match duplicate_field {
|
||||
DuplicateField::Ignore => return Box::pin(ready(Ok(()))),
|
||||
|
||||
DuplicateField::Deny => {
|
||||
return Box::pin(ready(Err(MultipartError::DuplicateField(
|
||||
field.name().to_owned(),
|
||||
))))
|
||||
}
|
||||
|
||||
DuplicateField::Replace => {}
|
||||
}
|
||||
}
|
||||
|
||||
Box::pin(async move {
|
||||
let field_name = field.name().to_owned();
|
||||
let t = T::read_field(req, field, limits).await?;
|
||||
state.insert(field_name, Box::new(t));
|
||||
Ok(())
|
||||
})
|
||||
}
|
||||
|
||||
fn from_state(name: &str, state: &'t mut State) -> Result<Self, MultipartError> {
|
||||
state
|
||||
.remove(name)
|
||||
.map(|m| *m.downcast::<T>().unwrap())
|
||||
.ok_or_else(|| MultipartError::MissingField(name.to_owned()))
|
||||
}
|
||||
}
|
||||
|
||||
/// Trait that allows a type to be used in the [`struct@MultipartForm`] extractor.
|
||||
///
|
||||
/// You should use the [`macro@MultipartForm`] macro to derive this for your struct.
|
||||
pub trait MultipartCollect: Sized {
|
||||
/// An optional limit in bytes to be applied to a given field name. Note that this limit will be shared
|
||||
/// across all fields sharing the same name.
|
||||
fn limit(field_name: &str) -> Option<usize>;
|
||||
|
||||
/// The extractor will call this function for each incoming field; the state can be updated
|
||||
/// with the processed field data.
|
||||
fn handle_field<'t>(
|
||||
req: &'t HttpRequest,
|
||||
field: Field,
|
||||
limits: &'t mut Limits,
|
||||
state: &'t mut State,
|
||||
) -> LocalBoxFuture<'t, Result<(), MultipartError>>;
|
||||
|
||||
/// Once all the fields have been processed and stored in the state, this is called
|
||||
/// to convert into the struct representation.
|
||||
fn from_state(state: State) -> Result<Self, MultipartError>;
|
||||
}
|
||||
|
||||
#[doc(hidden)]
|
||||
pub enum DuplicateField {
|
||||
/// Additional fields are not processed.
|
||||
Ignore,
|
||||
|
||||
/// An error will be raised.
|
||||
Deny,
|
||||
|
||||
/// All fields will be processed; the last one will replace all previous values.
|
||||
Replace,
|
||||
}
|
||||
|
||||
/// Used to keep track of the remaining limits for the form and current field.
|
||||
pub struct Limits {
|
||||
pub total_limit_remaining: usize,
|
||||
pub memory_limit_remaining: usize,
|
||||
pub field_limit_remaining: Option<usize>,
|
||||
}
|
||||
|
||||
impl Limits {
|
||||
pub fn new(total_limit: usize, memory_limit: usize) -> Self {
|
||||
Self {
|
||||
total_limit_remaining: total_limit,
|
||||
memory_limit_remaining: memory_limit,
|
||||
field_limit_remaining: None,
|
||||
}
|
||||
}
|
||||
|
||||
/// This function should be called within a [`FieldReader`] when reading each chunk of a field
|
||||
/// to ensure that the form limits are not exceeded.
|
||||
///
|
||||
/// # Arguments
|
||||
///
|
||||
/// * `bytes` - The number of bytes being read from this chunk
|
||||
/// * `in_memory` - Whether to consume from the memory limits
|
||||
pub fn try_consume_limits(
|
||||
&mut self,
|
||||
bytes: usize,
|
||||
in_memory: bool,
|
||||
) -> Result<(), MultipartError> {
|
||||
self.total_limit_remaining = self
|
||||
.total_limit_remaining
|
||||
.checked_sub(bytes)
|
||||
.ok_or(MultipartError::Payload(PayloadError::Overflow))?;
|
||||
|
||||
if in_memory {
|
||||
self.memory_limit_remaining = self
|
||||
.memory_limit_remaining
|
||||
.checked_sub(bytes)
|
||||
.ok_or(MultipartError::Payload(PayloadError::Overflow))?;
|
||||
}
|
||||
|
||||
if let Some(field_limit) = self.field_limit_remaining {
|
||||
self.field_limit_remaining = Some(
|
||||
field_limit
|
||||
.checked_sub(bytes)
|
||||
.ok_or(MultipartError::Payload(PayloadError::Overflow))?,
|
||||
);
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
}
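// A self-contained sketch of the contract described above: a custom `FieldReader` that charges
// every chunk against the shared limits. The `DiscardingCounter` type is made up for
// illustration; a real reader would typically also keep the data (cf. the `Bytes` reader above).

use actix_multipart::{
    form::{FieldReader, Limits},
    Field, MultipartError,
};
use actix_web::HttpRequest;
use futures_core::future::LocalBoxFuture;
use futures_util::TryStreamExt as _;

/// Counts how many bytes a field contained without retaining them.
pub struct DiscardingCounter {
    pub bytes_seen: usize,
}

impl<'t> FieldReader<'t> for DiscardingCounter {
    type Future = LocalBoxFuture<'t, Result<Self, MultipartError>>;

    fn read_field(_: &'t HttpRequest, mut field: Field, limits: &'t mut Limits) -> Self::Future {
        Box::pin(async move {
            let mut bytes_seen = 0;

            while let Some(chunk) = field.try_next().await? {
                // charge this chunk against the total/field limits; `false` because this
                // reader buffers nothing in memory
                limits.try_consume_limits(chunk.len(), false)?;
                bytes_seen += chunk.len();
            }

            Ok(DiscardingCounter { bytes_seen })
        })
    }
}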
|
||||
|
||||
/// Typed `multipart/form-data` extractor.
|
||||
///
|
||||
/// To extract typed data from a multipart stream, the inner type `T` must implement the
|
||||
/// [`MultipartCollect`] trait. You should use the [`macro@MultipartForm`] macro to derive this
|
||||
/// for your struct.
|
||||
///
|
||||
/// Add a [`MultipartFormConfig`] to your app data to configure extraction.
|
||||
#[derive(Deref, DerefMut)]
|
||||
pub struct MultipartForm<T: MultipartCollect>(pub T);
|
||||
|
||||
impl<T: MultipartCollect> MultipartForm<T> {
|
||||
/// Unwrap into inner `T` value.
|
||||
pub fn into_inner(self) -> T {
|
||||
self.0
|
||||
}
|
||||
}
|
||||
|
||||
impl<T> FromRequest for MultipartForm<T>
|
||||
where
|
||||
T: MultipartCollect,
|
||||
{
|
||||
type Error = Error;
|
||||
type Future = LocalBoxFuture<'static, Result<Self, Self::Error>>;
|
||||
|
||||
#[inline]
|
||||
fn from_request(req: &HttpRequest, payload: &mut dev::Payload) -> Self::Future {
|
||||
let mut payload = Multipart::new(req.headers(), payload.take());
|
||||
|
||||
let config = MultipartFormConfig::from_req(req);
|
||||
let mut limits = Limits::new(config.total_limit, config.memory_limit);
|
||||
|
||||
let req = req.clone();
|
||||
let req2 = req.clone();
|
||||
let err_handler = config.err_handler.clone();
|
||||
|
||||
Box::pin(
|
||||
async move {
|
||||
let mut state = State::default();
|
||||
// We need to ensure field limits are shared for all instances of this field name
|
||||
let mut field_limits = HashMap::<String, Option<usize>>::new();
|
||||
|
||||
while let Some(field) = payload.try_next().await? {
|
||||
// Retrieve the limit for this field
|
||||
let entry = field_limits
|
||||
.entry(field.name().to_owned())
|
||||
.or_insert_with(|| T::limit(field.name()));
|
||||
limits.field_limit_remaining = entry.to_owned();
|
||||
|
||||
T::handle_field(&req, field, &mut limits, &mut state).await?;
|
||||
|
||||
// Update the stored limit
|
||||
*entry = limits.field_limit_remaining;
|
||||
}
|
||||
let inner = T::from_state(state)?;
|
||||
Ok(MultipartForm(inner))
|
||||
}
|
||||
.map_err(move |err| {
|
||||
if let Some(handler) = err_handler {
|
||||
(*handler)(err, &req2)
|
||||
} else {
|
||||
err.into()
|
||||
}
|
||||
}),
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
type MultipartFormErrorHandler =
|
||||
Option<Arc<dyn Fn(MultipartError, &HttpRequest) -> Error + Send + Sync>>;
|
||||
|
||||
/// [`struct@MultipartForm`] extractor configuration.
|
||||
///
|
||||
/// Add to your app data to have it picked up by [`struct@MultipartForm`] extractors.
|
||||
#[derive(Clone)]
|
||||
pub struct MultipartFormConfig {
|
||||
total_limit: usize,
|
||||
memory_limit: usize,
|
||||
err_handler: MultipartFormErrorHandler,
|
||||
}
|
||||
|
||||
impl MultipartFormConfig {
|
||||
/// Sets maximum accepted payload size for the entire form. By default this limit is 50 MiB.
|
||||
pub fn total_limit(mut self, total_limit: usize) -> Self {
|
||||
self.total_limit = total_limit;
|
||||
self
|
||||
}
|
||||
|
||||
/// Sets maximum accepted data that will be read into memory. By default this limit is 2 MiB.
|
||||
pub fn memory_limit(mut self, memory_limit: usize) -> Self {
|
||||
self.memory_limit = memory_limit;
|
||||
self
|
||||
}
|
||||
|
||||
/// Sets custom error handler.
|
||||
pub fn error_handler<F>(mut self, f: F) -> Self
|
||||
where
|
||||
F: Fn(MultipartError, &HttpRequest) -> Error + Send + Sync + 'static,
|
||||
{
|
||||
self.err_handler = Some(Arc::new(f));
|
||||
self
|
||||
}
|
||||
|
||||
/// Extracts payload config from app data. Check both `T` and `Data<T>`, in that order, and fall
|
||||
/// back to the default payload config.
|
||||
fn from_req(req: &HttpRequest) -> &Self {
|
||||
req.app_data::<Self>()
|
||||
.or_else(|| req.app_data::<web::Data<Self>>().map(|d| d.as_ref()))
|
||||
.unwrap_or(&DEFAULT_CONFIG)
|
||||
}
|
||||
}
|
||||
|
||||
const DEFAULT_CONFIG: MultipartFormConfig = MultipartFormConfig {
|
||||
total_limit: 52_428_800, // 50 MiB
|
||||
memory_limit: 2_097_152, // 2 MiB
|
||||
err_handler: None,
|
||||
};
|
||||
|
||||
impl Default for MultipartFormConfig {
|
||||
fn default() -> Self {
|
||||
DEFAULT_CONFIG
|
||||
}
|
||||
}
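// A short, self-contained sketch of registering the config above so the extractor picks it up
// from app data. The limit values are arbitrary illustrative choices.

use actix_multipart::form::MultipartFormConfig;
use actix_web::App;

fn configure_app_sketch() {
    let config = MultipartFormConfig::default()
        .total_limit(10 * 1024 * 1024) // 10 MiB across the whole form
        .memory_limit(512 * 1024); // 512 KiB may be buffered in memory

    let _app = App::new().app_data(config);
}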

#[cfg(test)]
mod tests {
    use actix_http::encoding::Decoder;
    use actix_multipart_rfc7578::client::multipart;
    use actix_test::TestServer;
    use actix_web::{dev::Payload, http::StatusCode, web, App, HttpResponse, Responder};
    use awc::{Client, ClientResponse};

    use super::MultipartForm;
    use crate::form::{bytes::Bytes, tempfile::TempFile, text::Text, MultipartFormConfig};

    pub async fn send_form(
        srv: &TestServer,
        form: multipart::Form<'static>,
        uri: &'static str,
    ) -> ClientResponse<Decoder<Payload>> {
        Client::default()
            .post(srv.url(uri))
            .content_type(form.content_type())
            .send_body(multipart::Body::from(form))
            .await
            .unwrap()
    }

    /// Test `Option` fields.
    #[derive(MultipartForm)]
    struct TestOptions {
        field1: Option<Text<String>>,
        field2: Option<Text<String>>,
    }

    async fn test_options_route(form: MultipartForm<TestOptions>) -> impl Responder {
        assert!(form.field1.is_some());
        assert!(form.field2.is_none());
        HttpResponse::Ok().finish()
    }

    #[actix_rt::test]
    async fn test_options() {
        let srv =
            actix_test::start(|| App::new().route("/", web::post().to(test_options_route)));

        let mut form = multipart::Form::default();
        form.add_text("field1", "value");

        let response = send_form(&srv, form, "/").await;
        assert_eq!(response.status(), StatusCode::OK);
    }

    /// Test `Vec` fields.
    #[derive(MultipartForm)]
    struct TestVec {
        list1: Vec<Text<String>>,
        list2: Vec<Text<String>>,
    }

    async fn test_vec_route(form: MultipartForm<TestVec>) -> impl Responder {
        let form = form.into_inner();
        let strings = form
            .list1
            .into_iter()
            .map(|s| s.into_inner())
            .collect::<Vec<_>>();
        assert_eq!(strings, vec!["value1", "value2", "value3"]);
        assert_eq!(form.list2.len(), 0);
        HttpResponse::Ok().finish()
    }

    #[actix_rt::test]
    async fn test_vec() {
        let srv = actix_test::start(|| App::new().route("/", web::post().to(test_vec_route)));

        let mut form = multipart::Form::default();
        form.add_text("list1", "value1");
        form.add_text("list1", "value2");
        form.add_text("list1", "value3");

        let response = send_form(&srv, form, "/").await;
        assert_eq!(response.status(), StatusCode::OK);
    }

    /// Test the `rename` field attribute.
    #[derive(MultipartForm)]
    struct TestFieldRenaming {
        #[multipart(rename = "renamed")]
        field1: Text<String>,
        #[multipart(rename = "field1")]
        field2: Text<String>,
        field3: Text<String>,
    }

    async fn test_field_renaming_route(
        form: MultipartForm<TestFieldRenaming>,
    ) -> impl Responder {
        assert_eq!(&*form.field1, "renamed");
        assert_eq!(&*form.field2, "field1");
        assert_eq!(&*form.field3, "field3");
        HttpResponse::Ok().finish()
    }

    #[actix_rt::test]
    async fn test_field_renaming() {
        let srv = actix_test::start(|| {
            App::new().route("/", web::post().to(test_field_renaming_route))
        });

        let mut form = multipart::Form::default();
        form.add_text("renamed", "renamed");
        form.add_text("field1", "field1");
        form.add_text("field3", "field3");

        let response = send_form(&srv, form, "/").await;
        assert_eq!(response.status(), StatusCode::OK);
    }

    /// Test the `deny_unknown_fields` struct attribute.
    #[derive(MultipartForm)]
    #[multipart(deny_unknown_fields)]
    struct TestDenyUnknown {}

    #[derive(MultipartForm)]
    struct TestAllowUnknown {}

    async fn test_deny_unknown_route(_: MultipartForm<TestDenyUnknown>) -> impl Responder {
        HttpResponse::Ok().finish()
    }

    async fn test_allow_unknown_route(_: MultipartForm<TestAllowUnknown>) -> impl Responder {
        HttpResponse::Ok().finish()
    }

    #[actix_rt::test]
    async fn test_deny_unknown() {
        let srv = actix_test::start(|| {
            App::new()
                .route("/deny", web::post().to(test_deny_unknown_route))
                .route("/allow", web::post().to(test_allow_unknown_route))
        });

        let mut form = multipart::Form::default();
        form.add_text("unknown", "value");
        let response = send_form(&srv, form, "/deny").await;
        assert_eq!(response.status(), StatusCode::BAD_REQUEST);

        let mut form = multipart::Form::default();
        form.add_text("unknown", "value");
        let response = send_form(&srv, form, "/allow").await;
        assert_eq!(response.status(), StatusCode::OK);
    }

    /// Test the `duplicate_field` struct attribute.
    #[derive(MultipartForm)]
    #[multipart(duplicate_field = "deny")]
    struct TestDuplicateDeny {
        _field: Text<String>,
    }

    #[derive(MultipartForm)]
    #[multipart(duplicate_field = "replace")]
    struct TestDuplicateReplace {
        field: Text<String>,
    }

    #[derive(MultipartForm)]
    #[multipart(duplicate_field = "ignore")]
    struct TestDuplicateIgnore {
        field: Text<String>,
    }

    async fn test_duplicate_deny_route(_: MultipartForm<TestDuplicateDeny>) -> impl Responder {
        HttpResponse::Ok().finish()
    }

    async fn test_duplicate_replace_route(
        form: MultipartForm<TestDuplicateReplace>,
    ) -> impl Responder {
        assert_eq!(&*form.field, "second_value");
        HttpResponse::Ok().finish()
    }

    async fn test_duplicate_ignore_route(
        form: MultipartForm<TestDuplicateIgnore>,
    ) -> impl Responder {
        assert_eq!(&*form.field, "first_value");
        HttpResponse::Ok().finish()
    }

    #[actix_rt::test]
    async fn test_duplicate_field() {
        let srv = actix_test::start(|| {
            App::new()
                .route("/deny", web::post().to(test_duplicate_deny_route))
                .route("/replace", web::post().to(test_duplicate_replace_route))
                .route("/ignore", web::post().to(test_duplicate_ignore_route))
        });

        let mut form = multipart::Form::default();
        form.add_text("_field", "first_value");
        form.add_text("_field", "second_value");
        let response = send_form(&srv, form, "/deny").await;
        assert_eq!(response.status(), StatusCode::BAD_REQUEST);

        let mut form = multipart::Form::default();
        form.add_text("field", "first_value");
        form.add_text("field", "second_value");
        let response = send_form(&srv, form, "/replace").await;
        assert_eq!(response.status(), StatusCode::OK);

        let mut form = multipart::Form::default();
        form.add_text("field", "first_value");
        form.add_text("field", "second_value");
        let response = send_form(&srv, form, "/ignore").await;
        assert_eq!(response.status(), StatusCode::OK);
    }

    /// Test the Limits.
    #[derive(MultipartForm)]
    struct TestMemoryUploadLimits {
        field: Bytes,
    }

    #[derive(MultipartForm)]
    struct TestFileUploadLimits {
        field: TempFile,
    }

    async fn test_upload_limits_memory(
        form: MultipartForm<TestMemoryUploadLimits>,
    ) -> impl Responder {
        assert!(!form.field.data.is_empty());
        HttpResponse::Ok().finish()
    }

    async fn test_upload_limits_file(
        form: MultipartForm<TestFileUploadLimits>,
    ) -> impl Responder {
        assert!(form.field.size > 0);
        HttpResponse::Ok().finish()
    }

    #[actix_rt::test]
    async fn test_memory_limits() {
        let srv = actix_test::start(|| {
            App::new()
                .route("/text", web::post().to(test_upload_limits_memory))
                .route("/file", web::post().to(test_upload_limits_file))
                .app_data(
                    MultipartFormConfig::default()
                        .memory_limit(20)
                        .total_limit(usize::MAX),
                )
        });

        // Exceeds the 20 byte memory limit
        let mut form = multipart::Form::default();
        form.add_text("field", "this string is 28 bytes long");
        let response = send_form(&srv, form, "/text").await;
        assert_eq!(response.status(), StatusCode::BAD_REQUEST);

        // Memory limit should not apply when the data is being streamed to disk
        let mut form = multipart::Form::default();
        form.add_text("field", "this string is 28 bytes long");
        let response = send_form(&srv, form, "/file").await;
        assert_eq!(response.status(), StatusCode::OK);
    }

    #[actix_rt::test]
    async fn test_total_limit() {
        let srv = actix_test::start(|| {
            App::new()
                .route("/text", web::post().to(test_upload_limits_memory))
                .route("/file", web::post().to(test_upload_limits_file))
                .app_data(
                    MultipartFormConfig::default()
                        .memory_limit(usize::MAX)
                        .total_limit(20),
                )
        });

        // Within the 20 byte limit
        let mut form = multipart::Form::default();
        form.add_text("field", "7 bytes");
        let response = send_form(&srv, form, "/text").await;
        assert_eq!(response.status(), StatusCode::OK);

        // Exceeds the 20 byte overall limit
        let mut form = multipart::Form::default();
        form.add_text("field", "this string is 28 bytes long");
        let response = send_form(&srv, form, "/text").await;
        assert_eq!(response.status(), StatusCode::BAD_REQUEST);

        // Exceeds the 20 byte overall limit
        let mut form = multipart::Form::default();
        form.add_text("field", "this string is 28 bytes long");
        let response = send_form(&srv, form, "/file").await;
        assert_eq!(response.status(), StatusCode::BAD_REQUEST);
    }

    #[derive(MultipartForm)]
    struct TestFieldLevelLimits {
        #[multipart(limit = "30B")]
        field: Vec<Bytes>,
    }

    async fn test_field_level_limits_route(
        form: MultipartForm<TestFieldLevelLimits>,
    ) -> impl Responder {
        assert!(!form.field.is_empty());
        HttpResponse::Ok().finish()
    }

    #[actix_rt::test]
    async fn test_field_level_limits() {
        let srv = actix_test::start(|| {
            App::new()
                .route("/", web::post().to(test_field_level_limits_route))
                .app_data(
                    MultipartFormConfig::default()
                        .memory_limit(usize::MAX)
                        .total_limit(usize::MAX),
                )
        });

        // Within the 30 byte limit
        let mut form = multipart::Form::default();
        form.add_text("field", "this string is 28 bytes long");
        let response = send_form(&srv, form, "/").await;
        assert_eq!(response.status(), StatusCode::OK);

        // Exceeds the 30 byte limit
        let mut form = multipart::Form::default();
        form.add_text("field", "this string is more than 30 bytes long");
        let response = send_form(&srv, form, "/").await;
        assert_eq!(response.status(), StatusCode::BAD_REQUEST);

        // Total of values (14 bytes) is within 30 byte limit for "field"
        let mut form = multipart::Form::default();
        form.add_text("field", "7 bytes");
        form.add_text("field", "7 bytes");
        let response = send_form(&srv, form, "/").await;
        assert_eq!(response.status(), StatusCode::OK);

        // Total of values exceeds 30 byte limit for "field"
        let mut form = multipart::Form::default();
        form.add_text("field", "this string is 28 bytes long");
        form.add_text("field", "this string is 28 bytes long");
        let response = send_form(&srv, form, "/").await;
        assert_eq!(response.status(), StatusCode::BAD_REQUEST);
    }
}
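
A quick usage sketch (not part of the diff above; the form type, route, and limits are illustrative assumptions): the derive and config shown in this file are wired into an app roughly like this, with the config picked up from app data either directly or via `web::Data`.

use actix_multipart::form::{tempfile::TempFile, text::Text, MultipartForm, MultipartFormConfig};
use actix_web::{error, web, App, HttpResponse, HttpServer, Responder};

// Hypothetical upload form; any `#[derive(MultipartForm)]` struct works the same way.
#[derive(MultipartForm)]
struct UploadForm {
    description: Text<String>,
    file: TempFile,
}

async fn upload(form: MultipartForm<UploadForm>) -> impl Responder {
    HttpResponse::Ok().body(format!("received {} bytes", form.file.size))
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .app_data(
                MultipartFormConfig::default()
                    .total_limit(10 * 1024 * 1024) // 10 MiB for the whole form
                    .memory_limit(512 * 1024) // 512 KiB buffered in memory
                    .error_handler(|err, _req| error::ErrorBadRequest(err.to_string())),
            )
            .route("/upload", web::post().to(upload))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
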
actix-multipart/src/form/tempfile.rs (new file, 206 lines)
@@ -0,0 +1,206 @@
//! Writes a field to a temporary file on disk.

use std::{
    io,
    path::{Path, PathBuf},
    sync::Arc,
};

use actix_web::{http::StatusCode, web, Error, HttpRequest, ResponseError};
use derive_more::{Display, Error};
use futures_core::future::LocalBoxFuture;
use futures_util::TryStreamExt as _;
use mime::Mime;
use tempfile_dep::NamedTempFile;
use tokio::io::AsyncWriteExt;

use super::FieldErrorHandler;
use crate::{
    form::{FieldReader, Limits},
    Field, MultipartError,
};

/// Write the field to a temporary file on disk.
#[derive(Debug)]
pub struct TempFile {
    /// The temporary file on disk.
    pub file: NamedTempFile,

    /// The value of the `content-type` header.
    pub content_type: Option<Mime>,

    /// The `filename` value in the `content-disposition` header.
    pub file_name: Option<String>,

    /// The size in bytes of the file.
    pub size: usize,
}

impl<'t> FieldReader<'t> for TempFile {
    type Future = LocalBoxFuture<'t, Result<Self, MultipartError>>;

    fn read_field(
        req: &'t HttpRequest,
        mut field: Field,
        limits: &'t mut Limits,
    ) -> Self::Future {
        Box::pin(async move {
            let config = TempFileConfig::from_req(req);
            let field_name = field.name().to_owned();
            let mut size = 0;

            let file = config.create_tempfile().map_err(|err| {
                config.map_error(req, &field_name, TempFileError::FileIo(err))
            })?;

            let mut file_async = tokio::fs::File::from_std(file.reopen().map_err(|err| {
                config.map_error(req, &field_name, TempFileError::FileIo(err))
            })?);

            while let Some(chunk) = field.try_next().await? {
                limits.try_consume_limits(chunk.len(), false)?;
                size += chunk.len();
                file_async.write_all(chunk.as_ref()).await.map_err(|err| {
                    config.map_error(req, &field_name, TempFileError::FileIo(err))
                })?;
            }

            file_async.flush().await.map_err(|err| {
                config.map_error(req, &field_name, TempFileError::FileIo(err))
            })?;

            Ok(TempFile {
                file,
                content_type: field.content_type().map(ToOwned::to_owned),
                file_name: field
                    .content_disposition()
                    .get_filename()
                    .map(str::to_owned),
                size,
            })
        })
    }
}

#[derive(Debug, Display, Error)]
#[non_exhaustive]
pub enum TempFileError {
    /// File I/O Error
    #[display(fmt = "File I/O error: {}", _0)]
    FileIo(std::io::Error),
}

impl ResponseError for TempFileError {
    fn status_code(&self) -> StatusCode {
        StatusCode::INTERNAL_SERVER_ERROR
    }
}

/// Configuration for the [`TempFile`] field reader.
#[derive(Clone)]
pub struct TempFileConfig {
    err_handler: FieldErrorHandler<TempFileError>,
    directory: Option<PathBuf>,
}

impl TempFileConfig {
    fn create_tempfile(&self) -> io::Result<NamedTempFile> {
        if let Some(ref dir) = self.directory {
            NamedTempFile::new_in(dir)
        } else {
            NamedTempFile::new()
        }
    }
}

impl TempFileConfig {
    /// Sets custom error handler.
    pub fn error_handler<F>(mut self, f: F) -> Self
    where
        F: Fn(TempFileError, &HttpRequest) -> Error + Send + Sync + 'static,
    {
        self.err_handler = Some(Arc::new(f));
        self
    }

    /// Extracts payload config from app data. Check both `T` and `Data<T>`, in that order, and fall
    /// back to the default payload config.
    fn from_req(req: &HttpRequest) -> &Self {
        req.app_data::<Self>()
            .or_else(|| req.app_data::<web::Data<Self>>().map(|d| d.as_ref()))
            .unwrap_or(&DEFAULT_CONFIG)
    }

    fn map_error(
        &self,
        req: &HttpRequest,
        field_name: &str,
        err: TempFileError,
    ) -> MultipartError {
        let source = if let Some(ref err_handler) = self.err_handler {
            (err_handler)(err, req)
        } else {
            err.into()
        };

        MultipartError::Field {
            field_name: field_name.to_owned(),
            source,
        }
    }

    /// Sets the directory that temp files will be created in.
    ///
    /// The default temporary file location is platform dependent.
    pub fn directory(mut self, dir: impl AsRef<Path>) -> Self {
        self.directory = Some(dir.as_ref().to_owned());
        self
    }
}

const DEFAULT_CONFIG: TempFileConfig = TempFileConfig {
    err_handler: None,
    directory: None,
};

impl Default for TempFileConfig {
    fn default() -> Self {
        DEFAULT_CONFIG
    }
}

#[cfg(test)]
mod tests {
    use std::io::{Cursor, Read};

    use actix_multipart_rfc7578::client::multipart;
    use actix_web::{http::StatusCode, web, App, HttpResponse, Responder};

    use crate::form::{tempfile::TempFile, tests::send_form, MultipartForm};

    #[derive(MultipartForm)]
    struct FileForm {
        file: TempFile,
    }

    async fn test_file_route(form: MultipartForm<FileForm>) -> impl Responder {
        let mut form = form.into_inner();
        let mut contents = String::new();
        form.file.file.read_to_string(&mut contents).unwrap();
        assert_eq!(contents, "Hello, world!");
        assert_eq!(form.file.file_name.unwrap(), "testfile.txt");
        assert_eq!(form.file.content_type.unwrap(), mime::TEXT_PLAIN);
        HttpResponse::Ok().finish()
    }

    #[actix_rt::test]
    async fn test_file_upload() {
        let srv = actix_test::start(|| App::new().route("/", web::post().to(test_file_route)));

        let mut form = multipart::Form::default();
        let bytes = Cursor::new("Hello, world!");
        form.add_reader_file_with_mime("file", bytes, "testfile.txt", mime::TEXT_PLAIN);
        let response = send_form(&srv, form, "/").await;
        assert_eq!(response.status(), StatusCode::OK);
    }
}
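
A brief sketch of how the `TempFile` reader above is typically used (illustrative only; the paths and form type are assumptions, not part of the diff). `TempFileConfig::directory` controls where the `NamedTempFile` is created, and a handler can persist it once the upload is accepted.

use actix_multipart::form::{
    tempfile::{TempFile, TempFileConfig},
    MultipartForm,
};
use actix_web::{error, web, Error, HttpResponse};

// Hypothetical form with a single file field.
#[derive(MultipartForm)]
struct Upload {
    file: TempFile,
}

async fn save(form: MultipartForm<Upload>) -> Result<HttpResponse, Error> {
    let form = form.into_inner();
    let name = form.file.file_name.unwrap_or_else(|| "upload.bin".to_owned());

    // Move the temporary file to a permanent location (directory is illustrative).
    form.file
        .file
        .persist(format!("./uploads/{name}"))
        .map_err(error::ErrorInternalServerError)?;

    Ok(HttpResponse::Ok().finish())
}

// Registered with `App::new().configure(configure)`.
fn configure(cfg: &mut web::ServiceConfig) {
    // Create temp files under ./tmp instead of the platform default location.
    cfg.app_data(TempFileConfig::default().directory("./tmp"))
        .route("/save", web::post().to(save));
}
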
actix-multipart/src/form/text.rs (new file, 196 lines)
@@ -0,0 +1,196 @@
//! Deserializes a field from plain text.

use std::{str, sync::Arc};

use actix_web::{http::StatusCode, web, Error, HttpRequest, ResponseError};
use derive_more::{Deref, DerefMut, Display, Error};
use futures_core::future::LocalBoxFuture;
use serde::de::DeserializeOwned;

use super::FieldErrorHandler;
use crate::{
    form::{bytes::Bytes, FieldReader, Limits},
    Field, MultipartError,
};

/// Deserialize from plain text.
///
/// Internally this uses [`serde_plain`] for deserialization, which supports primitive types
/// including strings, numbers, and simple enums.
#[derive(Debug, Deref, DerefMut)]
pub struct Text<T: DeserializeOwned>(pub T);

impl<T: DeserializeOwned> Text<T> {
    /// Unwraps into inner value.
    pub fn into_inner(self) -> T {
        self.0
    }
}

impl<'t, T> FieldReader<'t> for Text<T>
where
    T: DeserializeOwned + 'static,
{
    type Future = LocalBoxFuture<'t, Result<Self, MultipartError>>;

    fn read_field(req: &'t HttpRequest, field: Field, limits: &'t mut Limits) -> Self::Future {
        Box::pin(async move {
            let config = TextConfig::from_req(req);
            let field_name = field.name().to_owned();

            if config.validate_content_type {
                let valid = if let Some(mime) = field.content_type() {
                    mime.subtype() == mime::PLAIN || mime.suffix() == Some(mime::PLAIN)
                } else {
                    // https://datatracker.ietf.org/doc/html/rfc7578#section-4.4
                    // content type defaults to text/plain, so None should be considered valid
                    true
                };

                if !valid {
                    return Err(MultipartError::Field {
                        field_name,
                        source: config.map_error(req, TextError::ContentType),
                    });
                }
            }

            let bytes = Bytes::read_field(req, field, limits).await?;

            let text = str::from_utf8(&bytes.data).map_err(|err| MultipartError::Field {
                field_name: field_name.clone(),
                source: config.map_error(req, TextError::Utf8Error(err)),
            })?;

            Ok(Text(serde_plain::from_str(text).map_err(|err| {
                MultipartError::Field {
                    field_name,
                    source: config.map_error(req, TextError::Deserialize(err)),
                }
            })?))
        })
    }
}

#[derive(Debug, Display, Error)]
#[non_exhaustive]
pub enum TextError {
    /// UTF-8 decoding error.
    #[display(fmt = "UTF-8 decoding error: {}", _0)]
    Utf8Error(str::Utf8Error),

    /// Deserialize error.
    #[display(fmt = "Plain text deserialize error: {}", _0)]
    Deserialize(serde_plain::Error),

    /// Content type error.
    #[display(fmt = "Content type error")]
    ContentType,
}

impl ResponseError for TextError {
    fn status_code(&self) -> StatusCode {
        StatusCode::BAD_REQUEST
    }
}

/// Configuration for the [`Text`] field reader.
#[derive(Clone)]
pub struct TextConfig {
    err_handler: FieldErrorHandler<TextError>,
    validate_content_type: bool,
}

impl TextConfig {
    /// Sets custom error handler.
    pub fn error_handler<F>(mut self, f: F) -> Self
    where
        F: Fn(TextError, &HttpRequest) -> Error + Send + Sync + 'static,
    {
        self.err_handler = Some(Arc::new(f));
        self
    }

    /// Extracts payload config from app data. Check both `T` and `Data<T>`, in that order, and fall
    /// back to the default payload config.
    fn from_req(req: &HttpRequest) -> &Self {
        req.app_data::<Self>()
            .or_else(|| req.app_data::<web::Data<Self>>().map(|d| d.as_ref()))
            .unwrap_or(&DEFAULT_CONFIG)
    }

    fn map_error(&self, req: &HttpRequest, err: TextError) -> Error {
        if let Some(ref err_handler) = self.err_handler {
            (err_handler)(err, req)
        } else {
            err.into()
        }
    }

    /// Sets whether or not the field must have a valid `Content-Type` header to be parsed.
    ///
    /// Note that an empty `Content-Type` is also accepted, as the multipart specification defines
    /// `text/plain` as the default for text fields.
    pub fn validate_content_type(mut self, validate_content_type: bool) -> Self {
        self.validate_content_type = validate_content_type;
        self
    }
}

const DEFAULT_CONFIG: TextConfig = TextConfig {
    err_handler: None,
    validate_content_type: true,
};

impl Default for TextConfig {
    fn default() -> Self {
        DEFAULT_CONFIG
    }
}

#[cfg(test)]
mod tests {
    use std::io::Cursor;

    use actix_multipart_rfc7578::client::multipart;
    use actix_web::{http::StatusCode, web, App, HttpResponse, Responder};

    use crate::form::{
        tests::send_form,
        text::{Text, TextConfig},
        MultipartForm,
    };

    #[derive(MultipartForm)]
    struct TextForm {
        number: Text<i32>,
    }

    async fn test_text_route(form: MultipartForm<TextForm>) -> impl Responder {
        assert_eq!(*form.number, 1025);
        HttpResponse::Ok().finish()
    }

    #[actix_rt::test]
    async fn test_content_type_validation() {
        let srv = actix_test::start(|| {
            App::new()
                .route("/", web::post().to(test_text_route))
                .app_data(TextConfig::default().validate_content_type(true))
        });

        // Deny because wrong content type
        let bytes = Cursor::new("1025");
        let mut form = multipart::Form::default();
        form.add_reader_file_with_mime("number", bytes, "", mime::APPLICATION_OCTET_STREAM);
        let response = send_form(&srv, form, "/").await;
        assert_eq!(response.status(), StatusCode::BAD_REQUEST);

        // Allow because correct content type
        let bytes = Cursor::new("1025");
        let mut form = multipart::Form::default();
        form.add_reader_file_with_mime("number", bytes, "", mime::TEXT_PLAIN);
        let response = send_form(&srv, form, "/").await;
        assert_eq!(response.status(), StatusCode::OK);
    }
}
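
An illustrative sketch of the `Text<T>` reader above (names are made up; not part of the diff). Because values go through `serde_plain`, numbers and simple enums deserialize directly from the field text, and content-type validation can be relaxed for clients that omit the header.

use actix_multipart::form::{
    text::{Text, TextConfig},
    MultipartForm,
};
use actix_web::{web, HttpResponse, Responder};
use serde::Deserialize;

#[derive(Deserialize)]
#[serde(rename_all = "lowercase")]
enum Visibility {
    Public,
    Private,
}

// Hypothetical metadata form.
#[derive(MultipartForm)]
struct Meta {
    age: Text<u32>,
    visibility: Text<Visibility>,
}

async fn submit(form: MultipartForm<Meta>) -> impl Responder {
    // `Text<T>` derefs to `T`, so inner values are used directly.
    HttpResponse::Ok().body(format!("age = {}", *form.age))
}

// Registered with `App::new().configure(configure)`.
fn configure(cfg: &mut web::ServiceConfig) {
    // Accept text fields even when the part carries a non-text content type.
    cfg.app_data(TextConfig::default().validate_content_type(false))
        .route("/submit", web::post().to(submit));
}
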
@@ -2,11 +2,20 @@

#![deny(rust_2018_idioms, nonstandard_style)]
#![warn(future_incompatible)]
#![allow(clippy::borrow_interior_mutable_const)]
#![allow(clippy::borrow_interior_mutable_const, clippy::uninlined_format_args)]
#![doc(html_logo_url = "https://actix.rs/img/logo.png")]
#![doc(html_favicon_url = "https://actix.rs/favicon.ico")]
#![cfg_attr(docsrs, feature(doc_auto_cfg))]

// This allows us to use the actix_multipart_derive within this crate's tests
#[cfg(test)]
extern crate self as actix_multipart;

mod error;
mod extractor;
mod server;

pub mod form;

pub use self::error::MultipartError;
pub use self::server::{Field, Multipart};
@@ -270,7 +270,9 @@ impl InnerMultipart {
                    match field.borrow_mut().poll(safety) {
                        Poll::Pending => return Poll::Pending,
                        Poll::Ready(Some(Ok(_))) => continue,
                        Poll::Ready(Some(Err(e))) => return Poll::Ready(Some(Err(e))),
                        Poll::Ready(Some(Err(err))) => {
                            return Poll::Ready(Some(Err(err)))
                        }
                        Poll::Ready(None) => true,
                    }
                }
@@ -289,10 +291,8 @@ impl InnerMultipart {
            match self.state {
                // read until first boundary
                InnerState::FirstBoundary => {
                    match InnerMultipart::skip_until_boundary(
                        &mut *payload,
                        &self.boundary,
                    )? {
                    match InnerMultipart::skip_until_boundary(&mut payload, &self.boundary)?
                    {
                        Some(eof) => {
                            if eof {
                                self.state = InnerState::Eof;
@@ -306,7 +306,7 @@ impl InnerMultipart {
                }
                // read boundary
                InnerState::Boundary => {
                    match InnerMultipart::read_boundary(&mut *payload, &self.boundary)? {
                    match InnerMultipart::read_boundary(&mut payload, &self.boundary)? {
                        None => return Poll::Pending,
                        Some(eof) => {
                            if eof {
@@ -323,7 +323,7 @@ impl InnerMultipart {

            // read field headers for next field
            if self.state == InnerState::Headers {
                if let Some(headers) = InnerMultipart::read_headers(&mut *payload)? {
                if let Some(headers) = InnerMultipart::read_headers(&mut payload)? {
                    self.state = InnerState::Boundary;
                    headers
                } else {
@@ -361,17 +361,18 @@ impl InnerMultipart {
                return Poll::Ready(Some(Err(MultipartError::NoContentDisposition)));
            };

            let ct: mime::Mime = headers
            let ct: Option<mime::Mime> = headers
                .get(&header::CONTENT_TYPE)
                .and_then(|ct| ct.to_str().ok())
                .and_then(|ct| ct.parse().ok())
                .unwrap_or(mime::APPLICATION_OCTET_STREAM);
                .and_then(|ct| ct.parse().ok());

            self.state = InnerState::Boundary;

            // nested multipart stream is not supported
            if ct.type_() == mime::MULTIPART {
                return Poll::Ready(Some(Err(MultipartError::Nested)));
            if let Some(mime) = &ct {
                if mime.type_() == mime::MULTIPART {
                    return Poll::Ready(Some(Err(MultipartError::Nested)));
                }
            }

            let field =
@@ -399,7 +400,7 @@ impl Drop for InnerMultipart {

/// A single field in a multipart stream
pub struct Field {
    ct: mime::Mime,
    ct: Option<mime::Mime>,
    cd: ContentDisposition,
    headers: HeaderMap,
    inner: Rc<RefCell<InnerField>>,
@@ -410,7 +411,7 @@ impl Field {
    fn new(
        safety: Safety,
        headers: HeaderMap,
        ct: mime::Mime,
        ct: Option<mime::Mime>,
        cd: ContentDisposition,
        inner: Rc<RefCell<InnerField>>,
    ) -> Self {
@@ -428,9 +429,13 @@ impl Field {
        &self.headers
    }

    /// Returns a reference to the field's content (mime) type.
    pub fn content_type(&self) -> &mime::Mime {
        &self.ct
    /// Returns a reference to the field's content (mime) type, if it is supplied by the client.
    ///
    /// According to [RFC 7578](https://www.rfc-editor.org/rfc/rfc7578#section-4.4), if it is not
    /// present, it should default to "text/plain". Note it is the responsibility of the client to
    /// provide the appropriate content type, there is no attempt to validate this by the server.
    pub fn content_type(&self) -> Option<&mime::Mime> {
        self.ct.as_ref()
    }

    /// Returns the field's Content-Disposition.
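
Since `content_type()` now returns an `Option`, a caller that needs a concrete mime type has to pick a fallback itself; a minimal sketch (the helper name is made up), mirroring the RFC 7578 default mentioned in the new doc comment:

use actix_multipart::Field;

fn effective_content_type(field: &Field) -> mime::Mime {
    // A missing content type is treated as text/plain, per RFC 7578 section 4.4.
    field.content_type().cloned().unwrap_or(mime::TEXT_PLAIN)
}
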
@@ -482,7 +487,11 @@ impl Stream for Field {

impl fmt::Debug for Field {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        writeln!(f, "\nField: {}", self.ct)?;
        if let Some(ct) = &self.ct {
            writeln!(f, "\nField: {}", ct)?;
        } else {
            writeln!(f, "\nField:")?;
        }
        writeln!(f, " boundary: {}", self.inner.borrow().boundary)?;
        writeln!(f, " headers:")?;
        for (key, val) in self.headers.iter() {
@@ -599,7 +608,7 @@ impl InnerField {
        }

        loop {
            return if let Some(idx) = twoway::find_bytes(&payload.buf[pos..], b"\r") {
            return if let Some(idx) = memchr::memmem::find(&payload.buf[pos..], b"\r") {
                let cur = pos + idx;

                // check if we have enough data for boundary detection
@@ -643,15 +652,15 @@ impl InnerField {
        let result = if let Some(mut payload) = self.payload.as_ref().unwrap().get_mut(s) {
            if !self.eof {
                let res = if let Some(ref mut len) = self.length {
                    InnerField::read_len(&mut *payload, len)
                    InnerField::read_len(&mut payload, len)
                } else {
                    InnerField::read_stream(&mut *payload, &self.boundary)
                    InnerField::read_stream(&mut payload, &self.boundary)
                };

                match res {
                    Poll::Pending => return Poll::Pending,
                    Poll::Ready(Some(Ok(bytes))) => return Poll::Ready(Some(Ok(bytes))),
                    Poll::Ready(Some(Err(e))) => return Poll::Ready(Some(Err(e))),
                    Poll::Ready(Some(Err(err))) => return Poll::Ready(Some(Err(err))),
                    Poll::Ready(None) => self.eof = true,
                }
            }
@@ -666,7 +675,7 @@ impl InnerField {
                }
                Poll::Ready(None)
            }
            Err(e) => Poll::Ready(Some(Err(e))),
            Err(err) => Poll::Ready(Some(Err(err))),
        }
    } else {
        Poll::Pending
@@ -787,7 +796,7 @@ impl PayloadBuffer {
        loop {
            match Pin::new(&mut self.stream).poll_next(cx) {
                Poll::Ready(Some(Ok(data))) => self.buf.extend_from_slice(&data),
                Poll::Ready(Some(Err(e))) => return Err(e),
                Poll::Ready(Some(Err(err))) => return Err(err),
                Poll::Ready(None) => {
                    self.eof = true;
                    return Ok(());
@@ -820,7 +829,7 @@ impl PayloadBuffer {

    /// Read until specified ending
    fn read_until(&mut self, line: &[u8]) -> Result<Option<Bytes>, MultipartError> {
        let res = twoway::find_bytes(&self.buf, line)
        let res = memchr::memmem::find(&self.buf, line)
            .map(|idx| self.buf.split_to(idx + line.len()).freeze());

        if res.is_none() && self.eof {
@@ -853,19 +862,22 @@

#[cfg(test)]
mod tests {
    use super::*;

    use actix_http::h1::Payload;
    use actix_web::http::header::{DispositionParam, DispositionType};
    use actix_web::rt;
    use actix_web::test::TestRequest;
    use actix_web::FromRequest;
    use bytes::Bytes;
    use futures_util::{future::lazy, StreamExt};
    use std::time::Duration;

    use actix_http::h1;
    use actix_web::{
        http::header::{DispositionParam, DispositionType},
        rt,
        test::TestRequest,
        FromRequest,
    };
    use bytes::Bytes;
    use futures_util::{future::lazy, StreamExt as _};
    use tokio::sync::mpsc;
    use tokio_stream::wrappers::UnboundedReceiverStream;

    use super::*;

    #[actix_rt::test]
    async fn test_boundary() {
        let headers = HeaderMap::new();
@@ -1024,8 +1036,8 @@ mod tests {
        assert_eq!(cd.disposition, DispositionType::FormData);
        assert_eq!(cd.parameters[0], DispositionParam::Name("file".into()));

        assert_eq!(field.content_type().type_(), mime::TEXT);
        assert_eq!(field.content_type().subtype(), mime::PLAIN);
        assert_eq!(field.content_type().unwrap().type_(), mime::TEXT);
        assert_eq!(field.content_type().unwrap().subtype(), mime::PLAIN);

        match field.next().await.unwrap() {
            Ok(chunk) => assert_eq!(chunk, "test"),
@@ -1041,8 +1053,8 @@ mod tests {

        match multipart.next().await.unwrap() {
            Ok(mut field) => {
                assert_eq!(field.content_type().type_(), mime::TEXT);
                assert_eq!(field.content_type().subtype(), mime::PLAIN);
                assert_eq!(field.content_type().unwrap().type_(), mime::TEXT);
                assert_eq!(field.content_type().unwrap().subtype(), mime::PLAIN);

                match field.next().await {
                    Some(Ok(chunk)) => assert_eq!(chunk, "data"),
@@ -1086,8 +1098,8 @@ mod tests {
        assert_eq!(cd.disposition, DispositionType::FormData);
        assert_eq!(cd.parameters[0], DispositionParam::Name("file".into()));

        assert_eq!(field.content_type().type_(), mime::TEXT);
        assert_eq!(field.content_type().subtype(), mime::PLAIN);
        assert_eq!(field.content_type().unwrap().type_(), mime::TEXT);
        assert_eq!(field.content_type().unwrap().subtype(), mime::PLAIN);

        assert_eq!(get_whole_field(&mut field).await, "test");
    }
@@ -1096,8 +1108,8 @@ mod tests {

        match multipart.next().await {
            Some(Ok(mut field)) => {
                assert_eq!(field.content_type().type_(), mime::TEXT);
                assert_eq!(field.content_type().subtype(), mime::PLAIN);
                assert_eq!(field.content_type().unwrap().type_(), mime::TEXT);
                assert_eq!(field.content_type().unwrap().subtype(), mime::PLAIN);

                assert_eq!(get_whole_field(&mut field).await, "data");
            }
@@ -1112,7 +1124,7 @@ mod tests {

    #[actix_rt::test]
    async fn test_basic() {
        let (_, payload) = Payload::create(false);
        let (_, payload) = h1::Payload::create(false);
        let mut payload = PayloadBuffer::new(payload);

        assert_eq!(payload.buf.len(), 0);
@@ -1122,7 +1134,7 @@ mod tests {

    #[actix_rt::test]
    async fn test_eof() {
        let (mut sender, payload) = Payload::create(false);
        let (mut sender, payload) = h1::Payload::create(false);
        let mut payload = PayloadBuffer::new(payload);

        assert_eq!(None, payload.read_max(4).unwrap());
@@ -1138,7 +1150,7 @@ mod tests {

    #[actix_rt::test]
    async fn test_err() {
        let (mut sender, payload) = Payload::create(false);
        let (mut sender, payload) = h1::Payload::create(false);
        let mut payload = PayloadBuffer::new(payload);
        assert_eq!(None, payload.read_max(1).unwrap());
        sender.set_error(PayloadError::Incomplete(None));
@@ -1147,7 +1159,7 @@ mod tests {

    #[actix_rt::test]
    async fn test_readmax() {
        let (mut sender, payload) = Payload::create(false);
        let (mut sender, payload) = h1::Payload::create(false);
        let mut payload = PayloadBuffer::new(payload);

        sender.feed_data(Bytes::from("line1"));
@@ -1164,7 +1176,7 @@ mod tests {

    #[actix_rt::test]
    async fn test_readexactly() {
        let (mut sender, payload) = Payload::create(false);
        let (mut sender, payload) = h1::Payload::create(false);
        let mut payload = PayloadBuffer::new(payload);

        assert_eq!(None, payload.read_exact(2));
@@ -1182,7 +1194,7 @@ mod tests {

    #[actix_rt::test]
    async fn test_readuntil() {
        let (mut sender, payload) = Payload::create(false);
        let (mut sender, payload) = h1::Payload::create(false);
        let mut payload = PayloadBuffer::new(payload);

        assert_eq!(None, payload.read_until(b"ne").unwrap());
@@ -1223,7 +1235,7 @@ mod tests {
    #[actix_rt::test]
    async fn test_multipart_payload_consumption() {
        // with sample payload and HttpRequest with no headers
        let (_, inner_payload) = Payload::create(false);
        let (_, inner_payload) = h1::Payload::create(false);
        let mut payload = actix_web::dev::Payload::from(inner_payload);
        let req = TestRequest::default().to_http_request();
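
For context on the `twoway::find_bytes` to `memchr::memmem::find` swap in the hunks above: the two calls have the same shape, returning the byte offset of the first occurrence of the needle, if any. A minimal standalone check (illustrative):

fn main() {
    let haystack = b"--boundary\r\ncontent";
    // Same contract as the old `twoway::find_bytes`: Some(start index) or None.
    assert_eq!(memchr::memmem::find(haystack, b"\r\n"), Some(10));
    assert_eq!(memchr::memmem::find(haystack, b"missing"), None);
}
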
@@ -1,11 +1,18 @@
# Changes

## Unreleased - 2021-xx-xx
- Minimum supported Rust version (MSRV) is now 1.56 due to transitive `hashbrown` dependency.
## Unreleased - 2023-xx-xx

## 0.5.1 - 2022-09-19

- Correct typo in error string for `i32` deserialization. [#2876]
- Minimum supported Rust version (MSRV) is now 1.59 due to transitive `time` dependency.

[#2876]: https://github.com/actix/actix-web/pull/2876

## 0.5.0 - 2022-02-22

### Added

- Add `Path::as_str`. [#2590]
- Add `ResourceDef::set_name`. [#373][net#373]
- Add `RouterBuilder::push`. [#2612]
@@ -17,6 +24,7 @@
- Support multi-pattern prefixes and joins. [#2356]

### Changed

- Change signature of `ResourceDef::capture_match_info_fn` to remove `user_data` parameter. [#2612]
- Deprecate `Path::path`. [#2590]
- Disallow prefix routes with tail segments. [#379][net#379]
@@ -41,6 +49,7 @@
- Return type of `ResourceDef::pattern` is now `Option<&str>`. [#373][net#373]

### Fixed

- Fix `ResourceDef`'s `PartialEq` implementation. [#373][net#373]
- Fix segment interpolation leaving `Path` in unintended state after matching. [#368][net#368]
- Improve malformed path error message. [#384][net#384]
@@ -49,6 +58,7 @@
- Static patterns in multi-patterns are no longer interpreted as regex. [#366][net#366]

### Removed

- `ResourceDef::name_mut`. [#373][net#373]
- Unused `ResourceInfo`. [#2612]

@@ -71,11 +81,11 @@
[net#380]: https://github.com/actix/actix-net/pull/380
[net#384]: https://github.com/actix/actix-net/pull/384

<details>
<summary>0.5.0 Pre-Releases</summary>

## 0.5.0-rc.3 - 2022-01-31

- Remove unused `ResourceInfo`. [#2612]
- Add `RouterBuilder::push`. [#2612]
- Change signature of `ResourceDef::capture_match_info_fn` to remove `user_data` parameter. [#2612]
@@ -86,33 +96,33 @@
[#2612]: https://github.com/actix/actix-web/pull/2612
[#2613]: https://github.com/actix/actix-web/pull/2613

## 0.5.0-rc.2 - 2022-01-21

- Add `Path::as_str`. [#2590]
- Deprecate `Path::path`. [#2590]

[#2590]: https://github.com/actix/actix-web/pull/2590

## 0.5.0-rc.1 - 2022-01-14

- `Resource` trait now have an associated type, `Path`, instead of the generic parameter. [#2568]
- `Resource` is now implemented for `&mut Path<_>` and `RefMut<Path<_>>`. [#2568]

[#2568]: https://github.com/actix/actix-web/pull/2568

## 0.5.0-beta.4 - 2022-01-04

- `PathDeserializer` now decodes all percent encoded characters in dynamic segments. [#2566]
- Minimum supported Rust version (MSRV) is now 1.54.

[#2566]: https://github.com/actix/actix-net/pull/2566

## 0.5.0-beta.3 - 2021-12-17

- Minimum supported Rust version (MSRV) is now 1.52.

## 0.5.0-beta.2 - 2021-09-09

- Introduce `ResourceDef::join`. [#380][net#380]
- Disallow prefix routes with tail segments. [#379][net#379]
- Enforce path separators on dynamic prefixes. [#378][net#378]
@@ -131,8 +141,8 @@
[#2355]: https://github.com/actix/actix-web/pull/2355
[#2356]: https://github.com/actix/actix-web/pull/2356

## 0.5.0-beta.1 - 2021-07-20

- Fix a bug in multi-patterns where static patterns are interpreted as regex. [#366][net#366]
- Introduce `ResourceDef::pattern_iter` to get an iterator over all patterns in a multi-pattern resource. [#373][net#373]
- Fix segment interpolation leaving `Path` in unintended state after matching. [#368][net#368]
@@ -161,8 +171,8 @@

</details>

## 0.4.0 - 2021-06-06

- When matching path parameters, `%25` is now kept in the percent-encoded form; no longer decoded to `%`. [#357][net#357]
- Path tail patterns now match new lines (`\n`) in request URL. [#360][net#360]
- Fixed a safety bug where `Path` could return a malformed string after percent decoding. [#359][net#359]
@@ -173,70 +183,70 @@
[net#359]: https://github.com/actix/actix-net/pull/359
[net#360]: https://github.com/actix/actix-net/pull/360

## 0.3.0 - 2019-12-31

- Version was yanked previously. See https://crates.io/crates/actix-router/0.3.0

## 0.2.7 - 2021-02-06

- Add `Router::recognize_checked` [#247][net#247]

[net#247]: https://github.com/actix/actix-net/pull/247

## 0.2.6 - 2021-01-09

- Use `bytestring` version range compatible with Bytes v1.0. [#246][net#246]

[net#246]: https://github.com/actix/actix-net/pull/246

## 0.2.5 - 2020-09-20

- Fix `from_hex()` method

## 0.2.4 - 2019-12-31

- Add `ResourceDef::resource_path_named()` path generation method

## 0.2.3 - 2019-12-25

- Add impl `IntoPattern` for `&String`

## 0.2.2 - 2019-12-25

- Use `IntoPattern` for `RouterBuilder::path()`

## 0.2.1 - 2019-12-25

- Add `IntoPattern` trait
- Add multi-pattern resources

## 0.2.0 - 2019-12-07

- Update http to 0.2
- Update regex to 1.3
- Use bytestring instead of string

## 0.1.5 - 2019-05-15

- Remove debug prints

## 0.1.4 - 2019-05-15

- Fix checked resource match

## 0.1.3 - 2019-04-22
- Added support for `remainder match` (i.e "/path/{tail}*")

- Added support for `remainder match` (i.e "/path/{tail}\*")

## 0.1.2 - 2019-04-07

- Export `Quoter` type
- Allow to reset `Path` instance

## 0.1.1 - 2019-04-03

- Get dynamic segment by name instead of iterator.

## 0.1.0 - 2019-03-09

- Initial release
Some files were not shown because too many files have changed in this diff.