mirror of https://github.com/fafhrd91/actix-web synced 2025-07-04 18:06:23 +02:00

Compare commits

...

118 Commits

Author SHA1 Message Date
b373e1370d prepare files 0.5.0 release 2020-12-26 04:05:45 +00:00
404b5a7709 Add optional support for hidden files/directories (#1811) 2020-12-26 03:36:15 +00:00
ecf08d5156 Remove boxed future from h1 Dispatcher (#1836) 2020-12-24 19:15:17 +00:00
87655b3028 reduce one clone on Arc. (#1850) 2020-12-23 23:58:25 +00:00
3a192400a6 Simplify handler (#1843) 2020-12-23 15:47:07 +00:00
2a7f2c1d59 dispatcher internals testing (#1840) 2020-12-23 01:28:17 +00:00
05f104c240 improve NormalizePath docs (#1839) 2020-12-23 00:19:20 +00:00
4dccd092f3 Bump rand from 0.7.x to 0.8.x (#1845) 2020-12-22 23:45:31 +00:00
95ccf1c9bc replace actix_utils::oneshot with futures_channle::oneshot (#1844) 2020-12-21 16:42:20 +00:00
6cbf27508a simplify ExtractService's return type (#1842) 2020-12-20 02:20:29 +00:00
79de04d862 optimise Extract service (#1841) 2020-12-19 16:33:34 +00:00
a4dbaa8ed1 remove boxed future in DefaultHeaders middleware (#1838) 2020-12-18 23:08:59 +00:00
c7b4c6edfa Disable PR comment from codecov 2020-12-17 21:38:52 +09:00
2a5215c1d6 Remove boxed future from HttpMessage (#1834) 2020-12-17 11:40:49 +00:00
97f615c245 remove boxed futures on Json extract type (#1832) 2020-12-16 23:34:33 +00:00
1a361273e7 optimize bytes and string payload extractors (#1831) 2020-12-16 22:40:26 +00:00
d7ce648445 remove boxed future for Option<T> and Result<T, E> extract type (#1829)
* remove boxed future for Option<T> and Result<T, E> extract type

* use ready macro

* fix fmt
2020-12-16 18:34:10 +00:00
fabc68659b Intradoc links conversion (#1827)
* switching to nightly for intra-doc links

* actix-files intra-doc conversion

* more specific Result

* intradoc conversion complete

* rm blank comments and readme doc link fixes

* macros and broken links
2020-12-13 13:28:39 +00:00
542db82282 Simplify wake up of task (#1826) 2020-12-12 20:07:06 +00:00
ae63eb8bb2 fix clippy warnings (#1806)
* fix clippy warnings

* prevent CI fail status caused by codecov
2020-12-09 11:22:19 +00:00
7a3776b770 remove two unused generics on BoxedRouteFuture types. (#1820) 2020-12-09 10:47:59 +00:00
ff79c33fd4 remove a box (#1814) 2020-12-06 11:42:15 +00:00
b75a9b7a20 add error to message in test helper func (#1812) 2020-12-05 04:57:56 +09:00
d0c6ca7671 test-server => actix-http-test (#1807) 2020-12-02 17:23:30 +00:00
24d525d978 prepare web 3.3.2 release 2020-12-01 22:22:46 +00:00
1f70ef155d Fix match_pattern() returning None for scope with resource of empty path (#1798)
* fix match_pattern function not returning pattern where scope has resource of path ""

* remove print in test

* make comparison on existing else if block

* add fix to changelog
2020-12-01 13:39:41 +00:00
7981e0068a Remove a panic in normalize middleware (#1762)
Co-authored-by: Yuki Okushi <huyuumi.dev@gmail.com>
2020-12-01 10:22:15 +09:00
32d59ca904 Upgrade socket2 dependency (#1803)
Upgrades to a version not making invalid assumptions about
the memory layout of std::net::SocketAddr
2020-12-01 04:18:02 +09:00
ea8bf36104 update web and awc changelogs 2020-11-29 16:35:35 +00:00
0b5b463cfa prepare web and awc releases
closes #1799
2020-11-29 16:33:45 +00:00
fe6ad816cc update dotgraphs 2020-11-25 00:54:00 +00:00
e72b787ba7 prepare actix-web and actix-http-test releases 2020-11-25 00:53:48 +00:00
efc317d3b0 prepare actix-http and awc releases 2020-11-25 00:07:56 +00:00
31057becca prepare actix-files release 0.4.1 2020-11-24 20:33:23 +00:00
f1a9b45437 improve docs for Files::new 2020-11-24 20:23:09 +00:00
5af46775b8 refactor quality and use TryFrom instead of custom trait (#1797) 2020-11-24 11:37:05 +00:00
70f4747a23 add method for getting accept type preference (#1793) 2020-11-24 10:08:57 +00:00
2f11ef089b fix rustdoc uploads 2020-11-24 00:29:13 +00:00
4100c50c70 add either extractor (#1788) 2020-11-20 18:02:41 +00:00
a929209967 actix-files intra-doc migration (#1785) 2020-11-10 23:54:38 +00:00
49e945c88f switching to nightly for intra-doc links (#1783) 2020-11-09 14:01:36 +00:00
9b42333fac Fix typo in Query extractor docs (#1777) 2020-11-06 13:34:42 +00:00
e5b86d189c Fix typo in request_data.rs (#1774) 2020-11-05 17:46:17 +00:00
4bfd5c2781 Upgrade serde_urlencoded to 0.7 (#1773) 2020-11-06 01:36:15 +09:00
9b6a089b36 fix awc doc example (#1772)
* fix awc readme example

Co-authored-by: Rob Ede <robjtede@icloud.com>
2020-11-05 06:20:01 +08:00
ceac97bb8d Update config.yml 2020-11-04 15:08:12 +00:00
61b65aa64a add common 1xx http response builders (#1768)
Co-authored-by: Yuki Okushi <huyuumi.dev@gmail.com>
2020-11-02 18:23:18 +09:00
5468c3c410 Drop content length headers from 101 responses (#1767)
Co-authored-by: Sebastian Mayr <smayr@atlassian.com>
2020-11-02 17:44:14 +09:00
b6385c2b4e Remove CoC on actix-http as duplicated 2020-10-31 12:12:19 +09:00
5135c1e3a0 Update CoC contact information 2020-10-31 12:06:51 +09:00
22b451cf2d fix deps.rs badge 2020-10-31 02:39:54 +00:00
42f51eb962 prepare web release 3.2.0 2020-10-30 03:15:22 +00:00
156c97cef2 prepare awc release 2.0.1 2020-10-30 02:50:53 +00:00
798d744eef prepare http release 2.1.0 2020-10-30 02:19:56 +00:00
4cb833616a deprecate builder if-x methods (#1760) 2020-10-30 02:10:05 +00:00
9963a5ef54 expose on_connect v2 (#1754)
Co-authored-by: Mikail Bagishov <bagishov.mikail@yandex.ru>
2020-10-30 02:03:26 +00:00
4519db36b2 register fns for custom request-derived logging units (#1749)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2020-10-29 18:38:49 +00:00
7030bf5fe8 Adding app_data to ServiceConfig (#1758)
Co-authored-by: Rob Ede <robjtede@icloud.com>
Co-authored-by: Augusto <augusto@flowciety.de>
2020-10-26 17:02:45 +00:00
20078fe603 Bump pin-project to 1.0 (#1733) 2020-10-25 19:41:44 +09:00
06e5042b94 use idenity encoding on client if no compression features are enabled (#1737)
Co-authored-by: Yuki Okushi <huyuumi.dev@gmail.com>
Co-authored-by: Rob Ede <robjtede@icloud.com>
2020-10-24 21:15:01 +01:00
41e7cec72f Re-export bytes::Buf and bytes::BufMut as well (#1750)
Co-authored-by: Daniel Egger <daniel.egger@axiros.com>
Co-authored-by: Rob Ede <robjtede@icloud.com>
2020-10-24 20:31:23 +01:00
d45a1aa6b6 Add web::ReqData<T> extractor (#1748)
Co-authored-by: Jonas Platte <jonas@lumeo.com>
2020-10-24 18:49:50 +01:00
98243db9f1 Print unconfigured Data<T> type when attempting extraction (#1743)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2020-10-20 17:35:34 +01:00
f92742bdac Bump base64 to 0.13 (#1744) 2020-10-19 18:24:22 +01:00
e563025b16 always construct shortslice using debug checked new constructor (#1741) 2020-10-19 12:51:30 +01:00
cfd5b381f1 Implement Logger middleware regex exclude pattern (#1723)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2020-10-19 07:18:16 +01:00
2f84914146 Skip some tests that cause ICE on nightly (#1740) 2020-10-19 11:52:05 +09:00
d765e9099d Fix clippy::rc_buffer (#1728) 2020-10-10 09:26:05 +09:00
34b23f31c9 prepare files release 0.4.0 2020-10-06 22:08:33 +01:00
26c1a901d9 add files preference for utf8 text responses (#1714) 2020-10-06 21:56:28 +01:00
c2c71cc626 Fix/suppress clippy warnings (#1720) 2020-10-01 18:19:09 +09:00
aa11231ee5 prepare web release 3.1.0 (#1716) 2020-09-30 11:07:35 +01:00
b5812b15f0 Remove Sized Bound for web::Data (#1712) 2020-09-29 22:44:12 +01:00
b4e02fe29a Fix cyclic references in ResourceMap (#1708) 2020-09-25 17:42:49 +01:00
37c76a39ab Fix Multipart consuming payload before header checks (#1704)
* Fix Multipart consuming payload before header checks

What
--
Split up logic in the constructor into two functions:

- **from_boundary:** build Multipart from boundary and stream
- **from_error:** build Multipart for MultipartError

Also we make the `boundary`, `from_boundary`, `from_error`  methods public within the crate so that we can use them in the extractor.

The extractor is then able to perform header checks and only consume the
payload if the checks pass.

* Add tests

* Add payload consumption test

Co-authored-by: Rob Ede <robjtede@icloud.com>
2020-09-25 14:50:37 +01:00
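For orientation, a minimal sketch of the constructor split described in the commit message above. The types below are simplified stand-ins for illustration only, not the real actix-multipart items, and the actual crate's signatures may differ.

```rust
// Simplified stand-ins for illustration only; not the real actix-multipart types.
#[derive(Debug)]
struct MultipartError(String);

struct Payload; // placeholder for the request body stream

struct Multipart {
    // Ok: boundary + payload, ready to stream parts.
    // Err: deferred error, surfaced when the stream is first polled.
    inner: Result<(String, Payload), MultipartError>,
}

impl Multipart {
    // Build from a successfully parsed boundary and the request payload.
    fn from_boundary(boundary: String, payload: Payload) -> Self {
        Multipart { inner: Ok((boundary, payload)) }
    }

    // Build an already-failed Multipart; the payload is not consumed here.
    fn from_error(err: MultipartError) -> Self {
        Multipart { inner: Err(err) }
    }
}

// Extractor-side check: inspect the Content-Type header first and only hand the
// payload to `from_boundary` when the check passes.
fn extract(content_type: Option<&str>, payload: Payload) -> Multipart {
    match content_type.and_then(|ct| ct.split("boundary=").nth(1)) {
        Some(boundary) => Multipart::from_boundary(boundary.to_owned(), payload),
        None => Multipart::from_error(MultipartError("no multipart boundary".into())),
    }
}

fn main() {
    assert!(extract(Some("multipart/form-data; boundary=abc"), Payload).inner.is_ok());
    assert!(extract(Some("text/plain"), Payload).inner.is_err());
}
```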
60e7e52276 Add TrailingSlash::MergeOnly behavior (#1695)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2020-09-25 12:50:59 +01:00
c53e9468bc prepare codegen 0.4.0 release (#1702) 2020-09-24 23:54:01 +01:00
162121bf8d Unify route macros (#1705) 2020-09-22 22:42:51 +01:00
f7bcad9567 split up files lib (#1685) 2020-09-20 23:18:25 +01:00
f9e3f78e45 eemove non-relevant comment from actix-http README.md (#1701) 2020-09-20 17:21:53 +01:00
1596893ef7 update actix-http dev-dependencies (#1696)
Co-authored-by: luojinming <luojm@hxsmart.com>
2020-09-19 23:20:34 +09:00
2a2474ca09 Update tinyvec to 1.0 (#1689) 2020-09-17 18:09:42 +01:00
509b2e6eec Provide attribute macro for multiple HTTP methods (#1674)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2020-09-16 22:37:41 +01:00
d707704556 prepare web release 3.0.2 (#1681) 2020-09-15 13:14:14 +01:00
a429ee6646 Add possibility to set address for test_server (#1645) 2020-09-15 12:09:16 +01:00
7f8073233a fix trimming to inaccessible root path (#1678) 2020-09-15 11:32:31 +01:00
4b4c9d1b93 update migration guide
closes #1680
2020-09-14 22:26:03 +01:00
3fde3be3d8 add trybuild tests to routing codegen (#1677) 2020-09-13 16:31:08 +01:00
f861508789 prepare web release 3.0.1 (#1676) 2020-09-13 03:24:44 +01:00
a4546f02d2 make TrailingSlash enum accessible (#1673)
Co-authored-by: Damian Lesiuk <lesiuk@sabre.com>
2020-09-13 00:55:39 +01:00
64a2c13cdf the big three point oh (#1668) 2020-09-11 13:50:10 +01:00
bf53fe5a22 bump actix dependency to v0.10 (#1666) 2020-09-11 12:09:52 +01:00
cf5138e740 fix clippy async_yields_async lints (#1667) 2020-09-11 11:29:17 +01:00
121075c1ef awc: Rename Client::build to Client::builder (#1665) 2020-09-11 09:24:39 +01:00
22089aff87 Improve json, form and query extractor config docs (#1661) 2020-09-10 15:40:20 +01:00
7787638f26 fix CI clippy warnings (#1664) 2020-09-10 14:46:35 +01:00
2f6e9738c4 prepare multipart and actors releases (#1663) 2020-09-10 12:54:27 +01:00
e39d166a17 Fix examples hyperlink in README (#1660) 2020-09-10 00:12:50 +01:00
059d1671d7 prepare release beta 4 (#1659) 2020-09-09 22:14:11 +01:00
3a27580ebe awc: improve module documentation (#1656)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2020-09-09 14:24:12 +01:00
9d0534999d bump connect and tls versions (#1655) 2020-09-09 09:20:54 +01:00
c54d73e0bb Improve awc websocket docs (#1654)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2020-09-07 12:04:54 +01:00
9a9d4b182e document all remaining unsafe usages (#1642)
adds some debug assertions where appropriate
2020-09-03 10:00:24 +01:00
4e321595bc extract more config types from Data<T> as well (#1641) 2020-09-02 22:12:07 +01:00
01cbef700f Fix a small typo in a doc comment. (#1649) 2020-08-28 22:16:41 +01:00
8497b5f490 integrate with updated actix-{codec, utils} (#1634) 2020-08-24 10:13:35 +01:00
75d86a6beb Configurable trailing slash behaviour for NormalizePath (#1639)
Co-authored-by: ljoonal <ljoona@ljoonal.xyz>
2020-08-19 12:21:52 +01:00
3892a95c11 Fix actix-web version to publish 2020-08-18 01:16:18 +09:00
5802eb797f awc,web: Bump up to next beta releases (#1638) 2020-08-18 01:08:40 +09:00
ff2ca0f420 Update rustls to 0.18 (#1637) 2020-08-18 00:28:39 +09:00
59ad1738e9 web: Bump up to 3.0.0-beta.2 (#1636) 2020-08-17 11:32:38 +01:00
aa2bd6fbfb http: Bump up to 2.0.0-beta.3 (#1630) 2020-08-14 19:42:14 +09:00
5aad8e24c7 Re-export all error types from awc (#1621) 2020-08-14 01:24:35 +01:00
6e97bc09f8 Use action to upload docs 2020-08-13 16:04:50 +09:00
160995b8d4 fix awc pool leak (#1626) 2020-08-09 21:49:43 +01:00
187646b2f9 match HttpRequest app_data behavior in ServiceRequest (#1618) 2020-08-09 15:51:38 +01:00
46627be36f add dep graph dot graphs (#1601) 2020-08-09 13:54:35 +01:00
a78380739e require rustls feature for client example (#1625) 2020-08-09 13:32:37 +01:00
172 changed files with 5870 additions and 2879 deletions

View File

@@ -1,8 +1,15 @@
 blank_issues_enabled: true
 contact_links:
-  - name: Gitter channel (actix-web)
+  - name: GitHub Discussions
+    url: https://github.com/actix/actix-web/discussions
+    about: Actix Web Q&A
+  - name: Gitter chat (actix-web)
     url: https://gitter.im/actix/actix-web
-    about: Please ask and answer questions about the actix-web here.
+    about: Actix Web Q&A
-  - name: Gitter channel (actix)
+  - name: Gitter chat (actix)
     url: https://gitter.im/actix/actix
-    about: Please ask and answer questions about the actix here.
+    about: Actix (actor framework) Q&A
+  - name: Actix Discord
+    url: https://discord.gg/NWpN5mmg3x
+    about: Actix developer discussion and community chat

View File

@@ -12,7 +12,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - name: Install Rust
         uses: actions-rs/toolchain@v1

View File

@@ -21,7 +21,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - name: Install ${{ matrix.version }}
         uses: actions-rs/toolchain@v1

View File

@@ -20,7 +20,7 @@ jobs:
     runs-on: macOS-latest
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - name: Install ${{ matrix.version }}
         uses: actions-rs/toolchain@v1

View File

@@ -11,12 +11,12 @@ jobs:
     if: github.repository == 'actix/actix-web'
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - name: Install Rust
         uses: actions-rs/toolchain@v1
         with:
-          toolchain: stable-x86_64-unknown-linux-gnu
+          toolchain: nightly-x86_64-unknown-linux-gnu
           profile: minimal
           override: true
@@ -29,7 +29,9 @@ jobs:
       - name: Tweak HTML
         run: echo "<meta http-equiv=refresh content=0;url=os_balloon/index.html>" > target/doc/index.html
-      - name: Upload documentation
-        run: |
-          git clone https://github.com/davisp/ghp-import.git
-          ./ghp-import/ghp_import.py -n -p -f -m "Documentation upload" -r https://${{ secrets.GITHUB_TOKEN }}@github.com/"${{ github.repository }}.git" target/doc
+      - name: Deploy to GitHub Pages
+        uses: JamesIves/github-pages-deploy-action@3.7.1
+        with:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          BRANCH: gh-pages
+          FOLDER: target/doc

View File

@@ -23,7 +23,7 @@ jobs:
     runs-on: windows-latest
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - name: Install ${{ matrix.version }}
         uses: actions-rs/toolchain@v1

4
.gitignore vendored
View File

@@ -9,6 +9,10 @@ guide/build/
 *.pid
 *.sock
 *~
+.DS_Store

 # These are backup files generated by rustfmt
 **/*.rs.bk

+# Configuration directory generated by CLion
+.idea

View File

@@ -2,10 +2,128 @@
## Unreleased - 2020-xx-xx

### Changed
* Bumped `rand` to `0.8`
### Fixed
* added the actual parsing error to `test::read_body_json` [#1812]
[#1812]: https://github.com/actix/actix-web/pull/1812
## 3.3.2 - 2020-12-01
### Fixed
* Removed an occasional `unwrap` on `None` panic in `NormalizePathNormalization`. [#1762]
* Fix `match_pattern()` returning `None` for scope with empty path resource. [#1798]
* Increase minimum `socket2` version. [#1803]
[#1762]: https://github.com/actix/actix-web/pull/1762
[#1798]: https://github.com/actix/actix-web/pull/1798
[#1803]: https://github.com/actix/actix-web/pull/1803
## 3.3.1 - 2020-11-29
* Ensure `actix-http` dependency uses same `serde_urlencoded`.
## 3.3.0 - 2020-11-25
### Added
* Add `Either<A, B>` extractor helper. [#1788]
### Changed
* Upgrade `serde_urlencoded` to `0.7`. [#1773]
[#1773]: https://github.com/actix/actix-web/pull/1773
[#1788]: https://github.com/actix/actix-web/pull/1788
## 3.2.0 - 2020-10-30
### Added
* Implement `exclude_regex` for Logger middleware. [#1723]
* Add request-local data extractor `web::ReqData`. [#1748]
* Add ability to register closure for request middleware logging. [#1749]
* Add `app_data` to `ServiceConfig`. [#1757]
* Expose `on_connect` for access to the connection stream before request is handled. [#1754]
### Changed
* Updated actix-web-codegen dependency for access to new `#[route(...)]` multi-method macro.
* Print non-configured `Data<T>` type when attempting extraction. [#1743]
* Re-export bytes::Buf{Mut} in web module. [#1750]
* Upgrade `pin-project` to `1.0`.
[#1723]: https://github.com/actix/actix-web/pull/1723
[#1743]: https://github.com/actix/actix-web/pull/1743
[#1748]: https://github.com/actix/actix-web/pull/1748
[#1750]: https://github.com/actix/actix-web/pull/1750
[#1754]: https://github.com/actix/actix-web/pull/1754
[#1749]: https://github.com/actix/actix-web/pull/1749
## 3.1.0 - 2020-09-29
### Changed
* Add `TrailingSlash::MergeOnly` behaviour to `NormalizePath`, which allows `NormalizePath`
to retain any trailing slashes. [#1695]
* Remove bound `std::marker::Sized` from `web::Data` to support storing `Arc<dyn Trait>`
via `web::Data::from` [#1710]
### Fixed
* `ResourceMap` debug printing is no longer infinitely recursive. [#1708]
[#1695]: https://github.com/actix/actix-web/pull/1695
[#1708]: https://github.com/actix/actix-web/pull/1708
[#1710]: https://github.com/actix/actix-web/pull/1710
## 3.0.2 - 2020-09-15
### Fixed
* `NormalizePath` when used with `TrailingSlash::Trim` no longer trims the root path "/". [#1678]
[#1678]: https://github.com/actix/actix-web/pull/1678
## 3.0.1 - 2020-09-13
### Changed
* `middleware::normalize::TrailingSlash` enum is now accessible. [#1673]
[#1673]: https://github.com/actix/actix-web/pull/1673
## 3.0.0 - 2020-09-11
* No significant changes from `3.0.0-beta.4`.
## 3.0.0-beta.4 - 2020-09-09
### Added
* `middleware::NormalizePath` now has configurable behaviour for either always having a trailing
slash, or as the new addition, always trimming trailing slashes. [#1639]
### Changed
* Update actix-codec and actix-utils dependencies. [#1634]
* `FormConfig` and `JsonConfig` configurations are now also considered when set
using `App::data`. [#1641]
* `HttpServer::maxconn` is renamed to the more expressive `HttpServer::max_connections`. [#1655]
* `HttpServer::maxconnrate` is renamed to the more expressive
`HttpServer::max_connection_rate`. [#1655]
[#1639]: https://github.com/actix/actix-web/pull/1639
[#1641]: https://github.com/actix/actix-web/pull/1641
[#1634]: https://github.com/actix/actix-web/pull/1634
[#1655]: https://github.com/actix/actix-web/pull/1655
## 3.0.0-beta.3 - 2020-08-17
### Changed
* Update `rustls` to 0.18
## 3.0.0-beta.2 - 2020-08-17
### Changed
* `PayloadConfig` is now also considered in `Bytes` and `String` extractors when set
  using `App::data`. [#1610]
* `web::Path` now has a public representation: `web::Path(pub T)` that enables
  destructuring. [#1594]
* `ServiceRequest::app_data` allows retrieval of non-Data data without splitting into parts to
  access `HttpRequest` which already allows this. [#1618]
* Re-export all error types from `awc`. [#1621]
* MSRV is now 1.42.0.

### Fixed
@@ -14,6 +132,8 @@
[#1594]: https://github.com/actix/actix-web/pull/1594
[#1609]: https://github.com/actix/actix-web/pull/1609
[#1610]: https://github.com/actix/actix-web/pull/1610
[#1618]: https://github.com/actix/actix-web/pull/1618
[#1621]: https://github.com/actix/actix-web/pull/1621

## 3.0.0-beta.1 - 2020-07-13
@@ -122,7 +242,7 @@
### Deleted
-* Delete HttpServer::run(), it is not useful witht async/await
+* Delete HttpServer::run(), it is not useful with async/await

## [2.0.0-alpha.3] - 2019-12-07
@@ -167,7 +287,7 @@
### Changed
-* Make UrlEncodedError::Overflow more informativve
+* Make UrlEncodedError::Overflow more informative
* Use actix-testing for testing utils
@@ -185,7 +305,7 @@
* Re-implement Host predicate (#989)
-* Form immplements Responder, returning a `application/x-www-form-urlencoded` response
+* Form implements Responder, returning a `application/x-www-form-urlencoded` response
* Add `into_inner` to `Data`
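As a quick illustration of two 3.2.0 additions listed in this changelog (the Logger `exclude_regex` pattern, #1723), a hedged sketch follows; it assumes actix-web 3.2+ with default features and that `exclude_regex` accepts a string pattern.

```rust
use actix_web::{middleware::Logger, web, App, HttpResponse, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            // Log all requests except health-check probes.
            .wrap(Logger::default().exclude_regex("^/health"))
            .route("/health", web::get().to(|| async { HttpResponse::Ok().finish() }))
            .route("/", web::get().to(|| async { HttpResponse::Ok().body("hello") }))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
```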

View File

@@ -34,10 +34,13 @@ This Code of Conduct applies both within project spaces and in public spaces whe
## Enforcement

-Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at fafhrd91@gmail.com. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
+Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at robjtede@icloud.com ([@robjtede]) or huyuumi@neet.club ([@JohnTitor]). The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

+[@robjtede]: https://github.com/robjtede
+[@JohnTitor]: https://github.com/JohnTitor

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]

View File

@@ -1,8 +1,8 @@
 [package]
 name = "actix-web"
-version = "3.0.0-beta.1"
+version = "3.3.2"
 authors = ["Nikolay Kim <fafhrd91@gmail.com>"]
-description = "Actix web is a simple, pragmatic and extremely fast web framework for Rust."
+description = "Actix Web is a powerful, pragmatic, and extremely fast web framework for Rust"
 readme = "README.md"
 keywords = ["actix", "http", "web", "framework", "async"]
 homepage = "https://actix.rs"
@@ -34,7 +34,7 @@ members = [
     "actix-multipart",
     "actix-web-actors",
     "actix-web-codegen",
-    "test-server",
+    "actix-http-test",
 ]

 [features]
@@ -64,24 +64,32 @@ required-features = ["compress"]
 name = "test_server"
 required-features = ["compress"]

+[[example]]
+name = "on_connect"
+required-features = []
+
+[[example]]
+name = "client"
+required-features = ["rustls"]
+
 [dependencies]
-actix-codec = "0.2.0"
+actix-codec = "0.3.0"
-actix-service = "1.0.2"
+actix-service = "1.0.6"
-actix-utils = "1.0.6"
+actix-utils = "2.0.0"
 actix-router = "0.2.4"
 actix-rt = "1.1.1"
 actix-server = "1.0.0"
 actix-testing = "1.0.0"
 actix-macros = "0.1.0"
 actix-threadpool = "0.3.1"
-actix-tls = "2.0.0-alpha.1"
+actix-tls = "2.0.0"
-actix-web-codegen = "0.3.0-beta.1"
+actix-web-codegen = "0.4.0"
-actix-http = "2.0.0-alpha.4"
+actix-http = "2.2.0"
-awc = { version = "2.0.0-beta.1", default-features = false }
+awc = { version = "2.0.3", default-features = false }
 bytes = "0.5.3"
-derive_more = "0.99.2"
+derive_more = "0.99.5"
 encoding_rs = "0.8"
 futures-channel = { version = "0.3.5", default-features = false }
 futures-core = { version = "0.3.5", default-features = false }
@@ -89,22 +97,23 @@ futures-util = { version = "0.3.5", default-features = false }
 fxhash = "0.2.1"
 log = "0.4"
 mime = "0.3"
-socket2 = "0.3"
+socket2 = "0.3.16"
-pin-project = "0.4.17"
+pin-project = "1.0.0"
-regex = "1.3"
+regex = "1.4"
-serde = { version = "1.0", features=["derive"] }
+serde = { version = "1.0", features = ["derive"] }
 serde_json = "1.0"
-serde_urlencoded = "0.6.1"
+serde_urlencoded = "0.7"
 time = { version = "0.2.7", default-features = false, features = ["std"] }
 url = "2.1"
 open-ssl = { package = "openssl", version = "0.10", optional = true }
-rust-tls = { package = "rustls", version = "0.17.0", optional = true }
+rust-tls = { package = "rustls", version = "0.18.0", optional = true }
-tinyvec = { version = "0.3", features = ["alloc"] }
+tinyvec = { version = "1", features = ["alloc"] }

 [dev-dependencies]
-actix = "0.10.0-alpha.1"
+actix = "0.10.0"
+actix-http = { version = "2.1.0", features = ["actors"] }
-rand = "0.7"
+rand = "0.8"
-env_logger = "0.7"
+env_logger = "0.8"
 serde_derive = "1.0"
 brotli2 = "0.3.2"
 flate2 = "1.0.13"
@@ -118,10 +127,10 @@ codegen-units = 1
 [patch.crates-io]
 actix-web = { path = "." }
 actix-http = { path = "actix-http" }
-actix-http-test = { path = "test-server" }
+actix-http-test = { path = "actix-http-test" }
 actix-web-codegen = { path = "actix-web-codegen" }
+actix-files = { path = "actix-files" }
 actix-multipart = { path = "actix-multipart" }
-actix-files = { path = "actix-files" }
 awc = { path = "awc" }

 [[bench]]

View File

@@ -186,7 +186,7 @@
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

-  Copyright 2017-NOW Nikolay Kim
+  Copyright 2017-NOW Actix Team

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
-Copyright (c) 2017 Nikolay Kim
+Copyright (c) 2017 Actix Team

 Permission is hereby granted, free of charge, to any
 person obtaining a copy of this software and associated

View File

@@ -1,11 +1,25 @@
## Unreleased

## 3.0.0
* The return type for `ServiceRequest::app_data::<T>()` was changed from returning a `Data<T>` to
simply a `T`. To access a `Data<T>` use `ServiceRequest::app_data::<Data<T>>()`.
* Cookie handling has been offloaded to the `cookie` crate:
* `USERINFO_ENCODE_SET` is no longer exposed. Percent-encoding is still supported; check docs.
* Some types now require lifetime parameters.
* The time crate was updated to `v0.2`, a major breaking change to the time crate, which affects
any `actix-web` method previously expecting a time v0.1 input.
* Setting a cookie's SameSite property, explicitly, to `SameSite::None` will now
  result in `SameSite=None` being sent with the response Set-Cookie header.
  To create a cookie without a SameSite attribute, remove any calls setting same_site.

* actix-http support for Actors messages was moved to actix-http crate and is enabled
  with feature `actors`

* content_length function is removed from actix-http.
  You can set Content-Length by normally setting the response body or calling no_chunking function.
@@ -32,6 +46,15 @@
  }
  ```
* `middleware::NormalizePath` can now also be configured to trim trailing slashes instead of always keeping one.
It will need `middleware::normalize::TrailingSlash` when being constructed with `NormalizePath::new(...)`,
or for an easier migration you can replace `wrap(middleware::NormalizePath)` with `wrap(middleware::NormalizePath::new(TrailingSlash::MergeOnly))`.
* `HttpServer::maxconn` is renamed to the more expressive `HttpServer::max_connections`.
* `HttpServer::maxconnrate` is renamed to the more expressive `HttpServer::max_connection_rate`.
## 2.0.0

* `HttpServer::start()` renamed to `HttpServer::run()`. It also possible to
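A hedged sketch of the `NormalizePath` change described in this migration hunk; it assumes actix-web 3.1+, where `TrailingSlash` is exposed at `middleware::normalize` per the changelog above.

```rust
use actix_web::{
    middleware::{normalize::TrailingSlash, NormalizePath},
    web, App, HttpResponse, HttpServer,
};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            // v2 style was `.wrap(middleware::NormalizePath)`; in v3 the trailing-slash
            // behaviour is explicit. MergeOnly merges duplicate slashes without adding
            // or removing a trailing slash.
            .wrap(NormalizePath::new(TrailingSlash::MergeOnly))
            .route("/ping", web::get().to(|| async { HttpResponse::Ok().body("pong") }))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
```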

View File

@@ -1,19 +1,21 @@
 <div align="center">
 <h1>Actix web</h1>
 <p>
-<strong>Actix web is a powerful, pragmatic, and extremely fast web framework for Rust</strong>
+<strong>Actix Web is a powerful, pragmatic, and extremely fast web framework for Rust</strong>
 </p>
 <p>
-[![crates.io](https://meritbadge.herokuapp.com/actix-web)](https://crates.io/crates/actix-web)
+[![crates.io](https://img.shields.io/crates/v/actix-web?label=latest)](https://crates.io/crates/actix-web)
-[![Documentation](https://docs.rs/actix-web/badge.svg)](https://docs.rs/actix-web)
+[![Documentation](https://docs.rs/actix-web/badge.svg?version=3.3.2)](https://docs.rs/actix-web/3.3.2)
 [![Version](https://img.shields.io/badge/rustc-1.42+-ab6000.svg)](https://blog.rust-lang.org/2020/03/12/Rust-1.42.html)
 ![License](https://img.shields.io/crates/l/actix-web.svg)
+[![Dependency Status](https://deps.rs/crate/actix-web/3.3.2/status.svg)](https://deps.rs/crate/actix-web/3.3.2)
 <br />
 [![Build Status](https://travis-ci.org/actix/actix-web.svg?branch=master)](https://travis-ci.org/actix/actix-web)
 [![codecov](https://codecov.io/gh/actix/actix-web/branch/master/graph/badge.svg)](https://codecov.io/gh/actix/actix-web)
 [![Download](https://img.shields.io/crates/d/actix-web.svg)](https://crates.io/crates/actix-web)
 [![Join the chat at https://gitter.im/actix/actix](https://badges.gitter.im/actix/actix.svg)](https://gitter.im/actix/actix?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
+[![Chat on Discord](https://img.shields.io/discord/771444961383153695?label=chat&logo=discord)](https://discord.gg/NWpN5mmg3x)
 </p>
 </div>
@@ -37,17 +39,12 @@
 ## Documentation
 * [Website & User Guide](https://actix.rs)
-* [Examples Repository](https://actix.rs/actix-web/actix_web)
+* [Examples Repository](https://github.com/actix/examples)
 * [API Documentation](https://docs.rs/actix-web)
 * [API Documentation (master branch)](https://actix.rs/actix-web/actix_web)

 ## Example
-<h2>
-WARNING: This example is for the master branch which is currently in beta stages for v3. For
-Actix web v2 see the <a href="https://actix.rs/docs/getting-started/">getting started guide</a>.
-</h2>

 Dependencies:

 ```toml

View File

@@ -1,11 +0,0 @@
# Cors Middleware for actix web framework [![Build Status](https://travis-ci.org/actix/actix-web.svg?branch=master)](https://travis-ci.org/actix/actix-web) [![codecov](https://codecov.io/gh/actix/actix-web/branch/master/graph/badge.svg)](https://codecov.io/gh/actix/actix-web) [![crates.io](https://meritbadge.herokuapp.com/actix-cors)](https://crates.io/crates/actix-cors) [![Join the chat at https://gitter.im/actix/actix](https://badges.gitter.im/actix/actix.svg)](https://gitter.im/actix/actix?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
**This crate moved to https://github.com/actix/actix-extras.**
## Documentation & community resources
* [User Guide](https://actix.rs/docs/)
* [API Documentation](https://docs.rs/actix-cors/)
* [Chat on gitter](https://gitter.im/actix/actix)
* Cargo package: [actix-cors](https://crates.io/crates/actix-cors)
* Minimum supported Rust version: 1.34 or later

View File

@@ -1,12 +1,34 @@
 # Changes

-## [Unreleased] - 2020-xx-xx
+## Unreleased - 2020-xx-xx

-## [0.3.0-beta.1] - 2020-07-15
+## 0.5.0 - 2020-12-26
+* Optionally support hidden files/directories. [#1811]
+
+[#1811]: https://github.com/actix/actix-web/pull/1811
+
+
+## 0.4.1 - 2020-11-24
+* Clarify order of parameters in `Files::new` and improve docs.
+
+
+## 0.4.0 - 2020-10-06
+* Add `Files::prefer_utf8` option that adds UTF-8 charset on certain response types. [#1714]
+
+[#1714]: https://github.com/actix/actix-web/pull/1714
+
+
+## 0.3.0 - 2020-09-11
+* No significant changes from 0.3.0-beta.1.
+
+
+## 0.3.0-beta.1 - 2020-07-15
 * Update `v_htmlescape` to 0.10
 * Update `actix-web` and `actix-http` dependencies to beta.1

-## [0.3.0-alpha.1] - 2020-05-23
+## 0.3.0-alpha.1 - 2020-05-23
 * Update `actix-web` and `actix-http` dependencies to alpha
 * Fix some typos in the docs
 * Bump minimum supported Rust version to 1.40
@@ -14,77 +36,73 @@
 [#1384]: https://github.com/actix/actix-web/pull/1384

-## [0.2.1] - 2019-12-22
+## 0.2.1 - 2019-12-22
 * Use the same format for file URLs regardless of platforms

-## [0.2.0] - 2019-12-20
+## 0.2.0 - 2019-12-20
 * Fix BodyEncoding trait import #1220

-## [0.2.0-alpha.1] - 2019-12-07
+## 0.2.0-alpha.1 - 2019-12-07
 * Migrate to `std::future`

-## [0.1.7] - 2019-11-06
-* Add an additional `filename*` param in the `Content-Disposition` header of `actix_files::NamedFile` to be more compatible. (#1151)
+## 0.1.7 - 2019-11-06
+* Add an additional `filename*` param in the `Content-Disposition` header of
+  `actix_files::NamedFile` to be more compatible. (#1151)

-## [0.1.6] - 2019-10-14
+## 0.1.6 - 2019-10-14
 * Add option to redirect to a slash-ended path `Files` #1132

-## [0.1.5] - 2019-10-08
+## 0.1.5 - 2019-10-08
 * Bump up `mime_guess` crate version to 2.0.1
 * Bump up `percent-encoding` crate version to 2.1
 * Allow user defined request guards for `Files` #1113

-## [0.1.4] - 2019-07-20
+## 0.1.4 - 2019-07-20
 * Allow to disable `Content-Disposition` header #686

-## [0.1.3] - 2019-06-28
+## 0.1.3 - 2019-06-28
 * Do not set `Content-Length` header, let actix-http set it #930

-## [0.1.2] - 2019-06-13
+## 0.1.2 - 2019-06-13
 * Content-Length is 0 for NamedFile HEAD request #914
 * Fix ring dependency from actix-web default features for #741

-## [0.1.1] - 2019-06-01
+## 0.1.1 - 2019-06-01
 * Static files are incorrectly served as both chunked and with length #812

-## [0.1.0] - 2019-05-25
-* NamedFile last-modified check always fails due to nano-seconds
-  in file modified date #820
+## 0.1.0 - 2019-05-25
+* NamedFile last-modified check always fails due to nano-seconds in file modified date #820

-## [0.1.0-beta.4] - 2019-05-12
+## 0.1.0-beta.4 - 2019-05-12
 * Update actix-web to beta.4

-## [0.1.0-beta.1] - 2019-04-20
+## 0.1.0-beta.1 - 2019-04-20
 * Update actix-web to beta.1

-## [0.1.0-alpha.6] - 2019-04-14
+## 0.1.0-alpha.6 - 2019-04-14
 * Update actix-web to alpha6

-## [0.1.0-alpha.4] - 2019-04-08
+## 0.1.0-alpha.4 - 2019-04-08
 * Update actix-web to alpha4

-## [0.1.0-alpha.2] - 2019-04-02
+## 0.1.0-alpha.2 - 2019-04-02
 * Add default handler support

-## [0.1.0-alpha.1] - 2019-03-28
+## 0.1.0-alpha.1 - 2019-03-28
 * Initial impl

View File

@@ -1,8 +1,8 @@
 [package]
 name = "actix-files"
-version = "0.3.0-beta.1"
+version = "0.5.0"
 authors = ["Nikolay Kim <fafhrd91@gmail.com>"]
-description = "Static files support for actix web."
+description = "Static file serving for Actix Web"
 readme = "README.md"
 keywords = ["actix", "http", "async", "futures"]
 homepage = "https://actix.rs"
@@ -17,20 +17,19 @@ name = "actix_files"
 path = "src/lib.rs"

 [dependencies]
-actix-web = { version = "3.0.0-beta.1", default-features = false }
+actix-web = { version = "3.0.0", default-features = false }
-actix-http = "2.0.0-beta.1"
-actix-service = "1.0.1"
+actix-service = "1.0.6"
 bitflags = "1"
 bytes = "0.5.3"
-futures-core = { version = "0.3.5", default-features = false }
+futures-core = { version = "0.3.7", default-features = false }
-futures-util = { version = "0.3.5", default-features = false }
+futures-util = { version = "0.3.7", default-features = false }
-derive_more = "0.99.2"
+derive_more = "0.99.5"
 log = "0.4"
 mime = "0.3"
 mime_guess = "2.0.1"
 percent-encoding = "2.1"
-v_htmlescape = "0.10"
+v_htmlescape = "0.12"

 [dev-dependencies]
 actix-rt = "1.0.0"
-actix-web = { version = "3.0.0-beta.1", features = ["openssl"] }
+actix-web = "3.0.0"

View File

@@ -1,9 +1,19 @@
-# Static files support for actix web [![Build Status](https://travis-ci.org/actix/actix-web.svg?branch=master)](https://travis-ci.org/actix/actix-web) [![codecov](https://codecov.io/gh/actix/actix-web/branch/master/graph/badge.svg)](https://codecov.io/gh/actix/actix-web) [![crates.io](https://meritbadge.herokuapp.com/actix-files)](https://crates.io/crates/actix-files) [![Join the chat at https://gitter.im/actix/actix](https://badges.gitter.im/actix/actix.svg)](https://gitter.im/actix/actix?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
-
-## Documentation & community resources
-
-* [User Guide](https://actix.rs/docs/)
-* [API Documentation](https://docs.rs/actix-files/)
-* [Chat on gitter](https://gitter.im/actix/actix)
-* Cargo package: [actix-files](https://crates.io/crates/actix-files)
-* Minimum supported Rust version: 1.40 or later
+# actix-files
+
+> Static file serving for Actix Web
+
+[![crates.io](https://img.shields.io/crates/v/actix-files?label=latest)](https://crates.io/crates/actix-files)
+[![Documentation](https://docs.rs/actix-files/badge.svg?version=0.5.0)](https://docs.rs/actix-files/0.5.0)
+[![Version](https://img.shields.io/badge/rustc-1.42+-ab6000.svg)](https://blog.rust-lang.org/2020/03/12/Rust-1.42.html)
+![License](https://img.shields.io/crates/l/actix-files.svg)
+<br />
+[![dependency status](https://deps.rs/crate/actix-files/0.5.0/status.svg)](https://deps.rs/crate/actix-files/0.5.0)
+[![Download](https://img.shields.io/crates/d/actix-files.svg)](https://crates.io/crates/actix-files)
+[![Join the chat at https://gitter.im/actix/actix](https://badges.gitter.im/actix/actix.svg)](https://gitter.im/actix/actix?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
+
+## Documentation & Resources
+
+- [API Documentation](https://docs.rs/actix-files/)
+- [Example Project](https://github.com/actix/examples/tree/master/static_index)
+- [Chat on Gitter](https://gitter.im/actix/actix-web)
+- Minimum supported Rust version: 1.42 or later

View File

@@ -0,0 +1,94 @@
use std::{
cmp, fmt,
fs::File,
future::Future,
io::{self, Read, Seek},
pin::Pin,
task::{Context, Poll},
};
use actix_web::{
error::{BlockingError, Error},
web,
};
use bytes::Bytes;
use futures_core::{ready, Stream};
use futures_util::future::{FutureExt, LocalBoxFuture};
use crate::handle_error;
type ChunkedBoxFuture =
LocalBoxFuture<'static, Result<(File, Bytes), BlockingError<io::Error>>>;
#[doc(hidden)]
/// A helper created from a `std::fs::File` which reads the file
/// chunk-by-chunk on a `ThreadPool`.
pub struct ChunkedReadFile {
pub(crate) size: u64,
pub(crate) offset: u64,
pub(crate) file: Option<File>,
pub(crate) fut: Option<ChunkedBoxFuture>,
pub(crate) counter: u64,
}
impl fmt::Debug for ChunkedReadFile {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.write_str("ChunkedReadFile")
}
}
impl Stream for ChunkedReadFile {
type Item = Result<Bytes, Error>;
fn poll_next(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Option<Self::Item>> {
if let Some(ref mut fut) = self.fut {
return match ready!(Pin::new(fut).poll(cx)) {
Ok((file, bytes)) => {
self.fut.take();
self.file = Some(file);
self.offset += bytes.len() as u64;
self.counter += bytes.len() as u64;
Poll::Ready(Some(Ok(bytes)))
}
Err(e) => Poll::Ready(Some(Err(handle_error(e)))),
};
}
let size = self.size;
let offset = self.offset;
let counter = self.counter;
if size == counter {
Poll::Ready(None)
} else {
let mut file = self.file.take().expect("Use after completion");
self.fut = Some(
web::block(move || {
let max_bytes =
cmp::min(size.saturating_sub(counter), 65_536) as usize;
let mut buf = Vec::with_capacity(max_bytes);
file.seek(io::SeekFrom::Start(offset))?;
let n_bytes =
file.by_ref().take(max_bytes as u64).read_to_end(&mut buf)?;
if n_bytes == 0 {
return Err(io::ErrorKind::UnexpectedEof.into());
}
Ok((file, Bytes::from(buf)))
})
.boxed_local(),
);
self.poll_next(cx)
}
}
}

View File

@@ -0,0 +1,114 @@
use std::{fmt::Write, fs::DirEntry, io, path::Path, path::PathBuf};
use actix_web::{dev::ServiceResponse, HttpRequest, HttpResponse};
use percent_encoding::{utf8_percent_encode, CONTROLS};
use v_htmlescape::escape as escape_html_entity;
/// A directory; responds with the generated directory listing.
#[derive(Debug)]
pub struct Directory {
/// Base directory.
pub base: PathBuf,
/// Path of subdirectory to generate listing for.
pub path: PathBuf,
}
impl Directory {
/// Create a new directory
pub fn new(base: PathBuf, path: PathBuf) -> Directory {
Directory { base, path }
}
/// Is this entry visible from this directory?
pub fn is_visible(&self, entry: &io::Result<DirEntry>) -> bool {
if let Ok(ref entry) = *entry {
if let Some(name) = entry.file_name().to_str() {
if name.starts_with('.') {
return false;
}
}
if let Ok(ref md) = entry.metadata() {
let ft = md.file_type();
return ft.is_dir() || ft.is_file() || ft.is_symlink();
}
}
false
}
}
pub(crate) type DirectoryRenderer =
dyn Fn(&Directory, &HttpRequest) -> Result<ServiceResponse, io::Error>;
// show file url as relative to static path
macro_rules! encode_file_url {
($path:ident) => {
utf8_percent_encode(&$path, CONTROLS)
};
}
// " -- &quot; & -- &amp; ' -- &#x27; < -- &lt; > -- &gt; / -- &#x2f;
macro_rules! encode_file_name {
($entry:ident) => {
escape_html_entity(&$entry.file_name().to_string_lossy())
};
}
pub(crate) fn directory_listing(
dir: &Directory,
req: &HttpRequest,
) -> Result<ServiceResponse, io::Error> {
let index_of = format!("Index of {}", req.path());
let mut body = String::new();
let base = Path::new(req.path());
for entry in dir.path.read_dir()? {
if dir.is_visible(&entry) {
let entry = entry.unwrap();
let p = match entry.path().strip_prefix(&dir.path) {
Ok(p) if cfg!(windows) => {
base.join(p).to_string_lossy().replace("\\", "/")
}
Ok(p) => base.join(p).to_string_lossy().into_owned(),
Err(_) => continue,
};
// if file is a directory, add '/' to the end of the name
if let Ok(metadata) = entry.metadata() {
if metadata.is_dir() {
let _ = write!(
body,
"<li><a href=\"{}\">{}/</a></li>",
encode_file_url!(p),
encode_file_name!(entry),
);
} else {
let _ = write!(
body,
"<li><a href=\"{}\">{}</a></li>",
encode_file_url!(p),
encode_file_name!(entry),
);
}
} else {
continue;
}
}
}
let html = format!(
"<html>\
<head><title>{}</title></head>\
<body><h1>{}</h1>\
<ul>\
{}\
</ul></body>\n</html>",
index_of, index_of, body
);
Ok(ServiceResponse::new(
req.clone(),
HttpResponse::Ok()
.content_type("text/html; charset=utf-8")
.body(html),
))
}

View File

@@ -0,0 +1,52 @@
use mime::Mime;
/// Transforms MIME `text/*` types into their UTF-8 equivalent, if supported.
///
/// MIME types that are converted
/// - application/javascript
/// - text/html
/// - text/css
/// - text/plain
/// - text/csv
/// - text/tab-separated-values
pub(crate) fn equiv_utf8_text(ct: Mime) -> Mime {
// use (roughly) order of file-type popularity for a web server
if ct == mime::APPLICATION_JAVASCRIPT {
return mime::APPLICATION_JAVASCRIPT_UTF_8;
}
if ct == mime::TEXT_HTML {
return mime::TEXT_HTML_UTF_8;
}
if ct == mime::TEXT_CSS {
return mime::TEXT_CSS_UTF_8;
}
if ct == mime::TEXT_PLAIN {
return mime::TEXT_PLAIN_UTF_8;
}
if ct == mime::TEXT_CSV {
return mime::TEXT_CSV_UTF_8;
}
if ct == mime::TEXT_TAB_SEPARATED_VALUES {
return mime::TEXT_TAB_SEPARATED_VALUES_UTF_8;
}
ct
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_equiv_utf8_text() {
assert_eq!(equiv_utf8_text(mime::TEXT_PLAIN), mime::TEXT_PLAIN_UTF_8);
assert_eq!(equiv_utf8_text(mime::TEXT_XML), mime::TEXT_XML);
assert_eq!(equiv_utf8_text(mime::IMAGE_PNG), mime::IMAGE_PNG);
}
}

282
actix-files/src/files.rs Normal file
View File

@@ -0,0 +1,282 @@
use std::{cell::RefCell, fmt, io, path::PathBuf, rc::Rc};
use actix_service::{boxed, IntoServiceFactory, ServiceFactory};
use actix_web::{
dev::{
AppService, HttpServiceFactory, ResourceDef, ServiceRequest, ServiceResponse,
},
error::Error,
guard::Guard,
http::header::DispositionType,
HttpRequest,
};
use futures_util::future::{ok, FutureExt, LocalBoxFuture};
use crate::{
directory_listing, named, Directory, DirectoryRenderer, FilesService,
HttpNewService, MimeOverride,
};
/// Static files handling service.
///
/// `Files` service must be registered with `App::service()` method.
///
/// ```rust
/// use actix_web::App;
/// use actix_files::Files;
///
/// let app = App::new()
/// .service(Files::new("/static", "."));
/// ```
pub struct Files {
path: String,
directory: PathBuf,
index: Option<String>,
show_index: bool,
redirect_to_slash: bool,
default: Rc<RefCell<Option<Rc<HttpNewService>>>>,
renderer: Rc<DirectoryRenderer>,
mime_override: Option<Rc<MimeOverride>>,
file_flags: named::Flags,
guards: Option<Rc<dyn Guard>>,
hidden_files: bool,
}
impl fmt::Debug for Files {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.write_str("Files")
}
}
impl Clone for Files {
fn clone(&self) -> Self {
Self {
directory: self.directory.clone(),
index: self.index.clone(),
show_index: self.show_index,
redirect_to_slash: self.redirect_to_slash,
default: self.default.clone(),
renderer: self.renderer.clone(),
file_flags: self.file_flags,
path: self.path.clone(),
mime_override: self.mime_override.clone(),
guards: self.guards.clone(),
hidden_files: self.hidden_files,
}
}
}
impl Files {
/// Create new `Files` instance for a specified base directory.
///
/// # Argument Order
/// The first argument (`mount_path`) is the root URL at which the static files are served.
/// For example, `/assets` will serve files at `example.com/assets/...`.
///
/// The second argument (`serve_from`) is the location on disk at which files are loaded.
/// This can be a relative path. For example, `./` would serve files from the current
/// working directory.
///
/// # Implementation Notes
/// If the mount path is set as the root path `/`, services registered after this one will
/// be inaccessible. Register more specific handlers and services first.
///
/// `Files` uses a threadpool for blocking filesystem operations. By default, the pool uses a
/// number of threads equal to 5x the number of available logical CPUs. Pool size can be changed
/// by setting ACTIX_THREADPOOL environment variable.
pub fn new<T: Into<PathBuf>>(mount_path: &str, serve_from: T) -> Files {
let orig_dir = serve_from.into();
let dir = match orig_dir.canonicalize() {
Ok(canon_dir) => canon_dir,
Err(_) => {
log::error!("Specified path is not a directory: {:?}", orig_dir);
PathBuf::new()
}
};
Files {
path: mount_path.to_owned(),
directory: dir,
index: None,
show_index: false,
redirect_to_slash: false,
default: Rc::new(RefCell::new(None)),
renderer: Rc::new(directory_listing),
mime_override: None,
file_flags: named::Flags::default(),
guards: None,
hidden_files: false,
}
}
/// Show files listing for directories.
///
/// By default show files listing is disabled.
pub fn show_files_listing(mut self) -> Self {
self.show_index = true;
self
}
/// Redirects to a slash-ended path when browsing a directory.
///
/// By default never redirect.
pub fn redirect_to_slash_directory(mut self) -> Self {
self.redirect_to_slash = true;
self
}
/// Set custom directory renderer
pub fn files_listing_renderer<F>(mut self, f: F) -> Self
where
for<'r, 's> F: Fn(&'r Directory, &'s HttpRequest) -> Result<ServiceResponse, io::Error>
+ 'static,
{
self.renderer = Rc::new(f);
self
}
/// Specifies mime override callback
pub fn mime_override<F>(mut self, f: F) -> Self
where
F: Fn(&mime::Name<'_>) -> DispositionType + 'static,
{
self.mime_override = Some(Rc::new(f));
self
}
/// Set index file
///
/// Shows specific index file for directory "/" instead of
/// showing files listing.
pub fn index_file<T: Into<String>>(mut self, index: T) -> Self {
self.index = Some(index.into());
self
}
/// Specifies whether to use ETag or not.
///
/// Default is true.
#[inline]
pub fn use_etag(mut self, value: bool) -> Self {
self.file_flags.set(named::Flags::ETAG, value);
self
}
/// Specifies whether to use Last-Modified or not.
///
/// Default is true.
#[inline]
pub fn use_last_modified(mut self, value: bool) -> Self {
self.file_flags.set(named::Flags::LAST_MD, value);
self
}
/// Specifies whether text responses should signal a UTF-8 encoding.
///
/// Default is false (but will default to true in a future version).
#[inline]
pub fn prefer_utf8(mut self, value: bool) -> Self {
self.file_flags.set(named::Flags::PREFER_UTF8, value);
self
}
/// Specifies custom guards to use for directory listings and files.
///
/// Default behaviour allows GET and HEAD.
#[inline]
pub fn use_guards<G: Guard + 'static>(mut self, guards: G) -> Self {
self.guards = Some(Rc::new(guards));
self
}
/// Disable `Content-Disposition` header.
///
/// By default Content-Disposition` header is enabled.
#[inline]
pub fn disable_content_disposition(mut self) -> Self {
self.file_flags.remove(named::Flags::CONTENT_DISPOSITION);
self
}
/// Sets default handler which is used when no matched file could be found.
pub fn default_handler<F, U>(mut self, f: F) -> Self
where
F: IntoServiceFactory<U>,
U: ServiceFactory<
Config = (),
Request = ServiceRequest,
Response = ServiceResponse,
Error = Error,
> + 'static,
{
// create and configure default resource
self.default = Rc::new(RefCell::new(Some(Rc::new(boxed::factory(
f.into_factory().map_init_err(|_| ()),
)))));
self
}
/// Enables serving hidden files and directories, allowing a leading dots in url fragments.
#[inline]
pub fn use_hidden_files(mut self) -> Self {
self.hidden_files = true;
self
}
}
impl HttpServiceFactory for Files {
fn register(self, config: &mut AppService) {
if self.default.borrow().is_none() {
*self.default.borrow_mut() = Some(config.default_service());
}
let rdef = if config.is_root() {
ResourceDef::root_prefix(&self.path)
} else {
ResourceDef::prefix(&self.path)
};
config.register_service(rdef, None, self, None)
}
}
impl ServiceFactory for Files {
type Request = ServiceRequest;
type Response = ServiceResponse;
type Error = Error;
type Config = ();
type Service = FilesService;
type InitError = ();
type Future = LocalBoxFuture<'static, Result<Self::Service, Self::InitError>>;
fn new_service(&self, _: ()) -> Self::Future {
let mut srv = FilesService {
directory: self.directory.clone(),
index: self.index.clone(),
show_index: self.show_index,
redirect_to_slash: self.redirect_to_slash,
default: None,
renderer: self.renderer.clone(),
mime_override: self.mime_override.clone(),
file_flags: self.file_flags,
guards: self.guards.clone(),
hidden_files: self.hidden_files,
};
if let Some(ref default) = *self.default.borrow() {
default
.new_service(())
.map(move |result| match result {
Ok(default) => {
srv.default = Some(default);
Ok(srv)
}
Err(_) => Err(()),
})
.boxed_local()
} else {
ok(srv).boxed_local()
}
}
}
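For reference, a hedged usage sketch of the builder methods defined in the `Files` listing above; it assumes actix-files 0.5 with actix-web 3.x and default features.

```rust
use actix_files::Files;
use actix_web::{App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new().service(
            // Mount ./static at the /assets URL prefix; register more specific
            // routes before mounting a service at "/" (see the notes on Files::new).
            Files::new("/assets", "./static")
                .index_file("index.html") // serve index.html instead of a listing
                .prefer_utf8(true)        // add charset=utf-8 to text responses
                .use_last_modified(true),
        )
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
```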

View File

@ -1,42 +1,49 @@
//! Static file serving for Actix Web.
//!
//! Provides a non-blocking service for serving static files from disk.
//!
//! # Example
//! ```rust
//! use actix_web::App;
//! use actix_files::Files;
//!
//! let app = App::new()
//! .service(Files::new("/static", ".").prefer_utf8(true));
//! ```
#![deny(rust_2018_idioms)]
#![warn(missing_docs, missing_debug_implementations)]
use std::io;
use actix_service::boxed::{BoxService, BoxServiceFactory};
use actix_web::{
dev::{ServiceRequest, ServiceResponse},
error::{BlockingError, Error, ErrorInternalServerError},
http::header::DispositionType,
};
use mime_guess::from_ext;
mod chunked;
mod directory;
mod encoding;
mod error;
mod files;
mod named;
mod path_buf;
mod range;
mod service;
pub use crate::chunked::ChunkedReadFile;
pub use crate::directory::Directory;
pub use crate::files::Files;
pub use crate::named::NamedFile;
pub use crate::range::HttpRange;
pub use crate::service::FilesService;
use self::directory::{directory_listing, DirectoryRenderer};
use self::error::FilesError;
use self::path_buf::PathBufWrap;
type HttpService = BoxService<ServiceRequest, ServiceResponse, Error>;
type HttpNewService = BoxServiceFactory<(), ServiceRequest, ServiceResponse, Error, ()>;
@ -49,614 +56,43 @@ pub fn file_extension_to_mime(ext: &str) -> mime::Mime {
from_ext(ext).first_or_octet_stream()
}
pub(crate) fn handle_error(err: BlockingError<io::Error>) -> Error {
match err {
BlockingError::Error(err) => err.into(),
BlockingError::Canceled => ErrorInternalServerError("Unexpected error"),
}
}
#[doc(hidden)]
/// A helper created from a `std::fs::File` which reads the file
/// chunk-by-chunk on a `ThreadPool`.
pub struct ChunkedReadFile {
size: u64,
offset: u64,
file: Option<File>,
fut:
Option<LocalBoxFuture<'static, Result<(File, Bytes), BlockingError<io::Error>>>>,
counter: u64,
}
type MimeOverride = dyn Fn(&mime::Name<'_>) -> DispositionType;
impl Stream for ChunkedReadFile {
type Item = Result<Bytes, Error>;
fn poll_next(
mut self: Pin<&mut Self>,
cx: &mut Context,
) -> Poll<Option<Self::Item>> {
if let Some(ref mut fut) = self.fut {
return match Pin::new(fut).poll(cx) {
Poll::Ready(Ok((file, bytes))) => {
self.fut.take();
self.file = Some(file);
self.offset += bytes.len() as u64;
self.counter += bytes.len() as u64;
Poll::Ready(Some(Ok(bytes)))
}
Poll::Ready(Err(e)) => Poll::Ready(Some(Err(handle_error(e)))),
Poll::Pending => Poll::Pending,
};
}
let size = self.size;
let offset = self.offset;
let counter = self.counter;
if size == counter {
Poll::Ready(None)
} else {
let mut file = self.file.take().expect("Use after completion");
self.fut = Some(
web::block(move || {
let max_bytes: usize;
max_bytes = cmp::min(size.saturating_sub(counter), 65_536) as usize;
let mut buf = Vec::with_capacity(max_bytes);
file.seek(io::SeekFrom::Start(offset))?;
let nbytes =
file.by_ref().take(max_bytes as u64).read_to_end(&mut buf)?;
if nbytes == 0 {
return Err(io::ErrorKind::UnexpectedEof.into());
}
Ok((file, Bytes::from(buf)))
})
.boxed_local(),
);
self.poll_next(cx)
}
}
}
type DirectoryRenderer =
dyn Fn(&Directory, &HttpRequest) -> Result<ServiceResponse, io::Error>;
/// A directory; responds with the generated directory listing.
#[derive(Debug)]
pub struct Directory {
/// Base directory
pub base: PathBuf,
/// Path of subdirectory to generate listing for
pub path: PathBuf,
}
impl Directory {
/// Create a new directory
pub fn new(base: PathBuf, path: PathBuf) -> Directory {
Directory { base, path }
}
/// Is this entry visible from this directory?
pub fn is_visible(&self, entry: &io::Result<DirEntry>) -> bool {
if let Ok(ref entry) = *entry {
if let Some(name) = entry.file_name().to_str() {
if name.starts_with('.') {
return false;
}
}
if let Ok(ref md) = entry.metadata() {
let ft = md.file_type();
return ft.is_dir() || ft.is_file() || ft.is_symlink();
}
}
false
}
}
// show file url as relative to static path
macro_rules! encode_file_url {
($path:ident) => {
utf8_percent_encode(&$path, CONTROLS)
};
}
// " -- &quot; & -- &amp; ' -- &#x27; < -- &lt; > -- &gt; / -- &#x2f;
macro_rules! encode_file_name {
($entry:ident) => {
escape_html_entity(&$entry.file_name().to_string_lossy())
};
}
fn directory_listing(
dir: &Directory,
req: &HttpRequest,
) -> Result<ServiceResponse, io::Error> {
let index_of = format!("Index of {}", req.path());
let mut body = String::new();
let base = Path::new(req.path());
for entry in dir.path.read_dir()? {
if dir.is_visible(&entry) {
let entry = entry.unwrap();
let p = match entry.path().strip_prefix(&dir.path) {
Ok(p) if cfg!(windows) => {
base.join(p).to_string_lossy().replace("\\", "/")
}
Ok(p) => base.join(p).to_string_lossy().into_owned(),
Err(_) => continue,
};
// if file is a directory, add '/' to the end of the name
if let Ok(metadata) = entry.metadata() {
if metadata.is_dir() {
let _ = write!(
body,
"<li><a href=\"{}\">{}/</a></li>",
encode_file_url!(p),
encode_file_name!(entry),
);
} else {
let _ = write!(
body,
"<li><a href=\"{}\">{}</a></li>",
encode_file_url!(p),
encode_file_name!(entry),
);
}
} else {
continue;
}
}
}
let html = format!(
"<html>\
<head><title>{}</title></head>\
<body><h1>{}</h1>\
<ul>\
{}\
</ul></body>\n</html>",
index_of, index_of, body
);
Ok(ServiceResponse::new(
req.clone(),
HttpResponse::Ok()
.content_type("text/html; charset=utf-8")
.body(html),
))
}
type MimeOverride = dyn Fn(&mime::Name) -> DispositionType;
/// Static files handling
///
/// `Files` service must be registered with `App::service()` method.
///
/// ```rust
/// use actix_web::App;
/// use actix_files as fs;
///
/// fn main() {
/// let app = App::new()
/// .service(fs::Files::new("/static", "."));
/// }
/// ```
pub struct Files {
path: String,
directory: PathBuf,
index: Option<String>,
show_index: bool,
redirect_to_slash: bool,
default: Rc<RefCell<Option<Rc<HttpNewService>>>>,
renderer: Rc<DirectoryRenderer>,
mime_override: Option<Rc<MimeOverride>>,
file_flags: named::Flags,
// FIXME: Should re-visit later.
#[allow(clippy::redundant_allocation)]
guards: Option<Rc<Box<dyn Guard>>>,
}
impl Clone for Files {
fn clone(&self) -> Self {
Self {
directory: self.directory.clone(),
index: self.index.clone(),
show_index: self.show_index,
redirect_to_slash: self.redirect_to_slash,
default: self.default.clone(),
renderer: self.renderer.clone(),
file_flags: self.file_flags,
path: self.path.clone(),
mime_override: self.mime_override.clone(),
guards: self.guards.clone(),
}
}
}
impl Files {
/// Create new `Files` instance for specified base directory.
///
/// `File` uses `ThreadPool` for blocking filesystem operations.
/// By default, a pool with 5x the number of available CPUs is used.
/// The pool size can be changed by setting the ACTIX_THREADPOOL environment variable.
pub fn new<T: Into<PathBuf>>(path: &str, dir: T) -> Files {
let orig_dir = dir.into();
let dir = match orig_dir.canonicalize() {
Ok(canon_dir) => canon_dir,
Err(_) => {
log::error!("Specified path is not a directory: {:?}", orig_dir);
PathBuf::new()
}
};
Files {
path: path.to_string(),
directory: dir,
index: None,
show_index: false,
redirect_to_slash: false,
default: Rc::new(RefCell::new(None)),
renderer: Rc::new(directory_listing),
mime_override: None,
file_flags: named::Flags::default(),
guards: None,
}
}
/// Show files listing for directories.
///
/// By default the file listing for directories is disabled.
pub fn show_files_listing(mut self) -> Self {
self.show_index = true;
self
}
/// Redirects to a slash-ended path when browsing a directory.
///
/// By default, no redirect is performed.
pub fn redirect_to_slash_directory(mut self) -> Self {
self.redirect_to_slash = true;
self
}
/// Set custom directory renderer
pub fn files_listing_renderer<F>(mut self, f: F) -> Self
where
for<'r, 's> F: Fn(&'r Directory, &'s HttpRequest) -> Result<ServiceResponse, io::Error>
+ 'static,
{
self.renderer = Rc::new(f);
self
}
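A hedged sketch of a custom renderer (the plain-text body is illustrative; a real renderer would typically build an HTML listing from `dir.path`):

```rust
use actix_files::{Directory, Files};
use actix_web::{dev::ServiceResponse, HttpRequest, HttpResponse};

// Hypothetical renderer: replace the generated HTML listing with a
// one-line plain-text response naming the directory.
fn plain_listing(dir: &Directory, req: &HttpRequest) -> Result<ServiceResponse, std::io::Error> {
    let body = format!("listing of {}", dir.path.display());
    Ok(ServiceResponse::new(
        req.clone(),
        HttpResponse::Ok().content_type("text/plain").body(body),
    ))
}

let files = Files::new("/static", ".")
    .show_files_listing()
    .files_listing_renderer(plain_listing);
```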
/// Specifies mime override callback
pub fn mime_override<F>(mut self, f: F) -> Self
where
F: Fn(&mime::Name) -> DispositionType + 'static,
{
self.mime_override = Some(Rc::new(f));
self
}
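For example (mirroring the `all_attachment` helper used in the tests below; the mount point is illustrative):

```rust
use actix_files::Files;
use actix_web::http::header::DispositionType;

// Force `Content-Disposition: attachment` regardless of detected mime type.
fn all_attachment(_: &mime::Name<'_>) -> DispositionType {
    DispositionType::Attachment
}

let files = Files::new("/downloads", ".").mime_override(all_attachment);
```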
/// Set index file
///
/// Shows specific index file for directory "/" instead of
/// showing files listing.
pub fn index_file<T: Into<String>>(mut self, index: T) -> Self {
self.index = Some(index.into());
self
}
#[inline]
/// Specifies whether to use ETag or not.
///
/// Default is true.
pub fn use_etag(mut self, value: bool) -> Self {
self.file_flags.set(named::Flags::ETAG, value);
self
}
#[inline]
/// Specifies whether to use Last-Modified or not.
///
/// Default is true.
pub fn use_last_modified(mut self, value: bool) -> Self {
self.file_flags.set(named::Flags::LAST_MD, value);
self
}
/// Specifies custom guards to use for directory listings and files.
///
/// Default behaviour allows GET and HEAD.
#[inline]
pub fn use_guards<G: Guard + 'static>(mut self, guards: G) -> Self {
self.guards = Some(Rc::new(Box::new(guards)));
self
}
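For example (the header values are illustrative; any `Guard` implementation works here):

```rust
use actix_files::Files;
use actix_web::guard;

// Only serve these files to requests carrying a matching Host header.
let files = Files::new("/assets", ".").use_guards(guard::Header("host", "example.com"));
```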
/// Disable `Content-Disposition` header.
///
/// By default the `Content-Disposition` header is enabled.
#[inline]
pub fn disable_content_disposition(mut self) -> Self {
self.file_flags.remove(named::Flags::CONTENT_DISPOSITION);
self
}
/// Sets default handler which is used when no matched file could be found.
pub fn default_handler<F, U>(mut self, f: F) -> Self
where
F: IntoServiceFactory<U>,
U: ServiceFactory<
Config = (),
Request = ServiceRequest,
Response = ServiceResponse,
Error = Error,
> + 'static,
{
// create and configure default resource
self.default = Rc::new(RefCell::new(Some(Rc::new(boxed::factory(
f.into_factory().map_init_err(|_| ()),
)))));
self
}
}
impl HttpServiceFactory for Files {
fn register(self, config: &mut AppService) {
if self.default.borrow().is_none() {
*self.default.borrow_mut() = Some(config.default_service());
}
let rdef = if config.is_root() {
ResourceDef::root_prefix(&self.path)
} else {
ResourceDef::prefix(&self.path)
};
config.register_service(rdef, None, self, None)
}
}
impl ServiceFactory for Files {
type Request = ServiceRequest;
type Response = ServiceResponse;
type Error = Error;
type Config = ();
type Service = FilesService;
type InitError = ();
type Future = LocalBoxFuture<'static, Result<Self::Service, Self::InitError>>;
fn new_service(&self, _: ()) -> Self::Future {
let mut srv = FilesService {
directory: self.directory.clone(),
index: self.index.clone(),
show_index: self.show_index,
redirect_to_slash: self.redirect_to_slash,
default: None,
renderer: self.renderer.clone(),
mime_override: self.mime_override.clone(),
file_flags: self.file_flags,
guards: self.guards.clone(),
};
if let Some(ref default) = *self.default.borrow() {
default
.new_service(())
.map(move |result| match result {
Ok(default) => {
srv.default = Some(default);
Ok(srv)
}
Err(_) => Err(()),
})
.boxed_local()
} else {
ok(srv).boxed_local()
}
}
}
pub struct FilesService {
directory: PathBuf,
index: Option<String>,
show_index: bool,
redirect_to_slash: bool,
default: Option<HttpService>,
renderer: Rc<DirectoryRenderer>,
mime_override: Option<Rc<MimeOverride>>,
file_flags: named::Flags,
// FIXME: Should re-visit later.
#[allow(clippy::redundant_allocation)]
guards: Option<Rc<Box<dyn Guard>>>,
}
impl FilesService {
fn handle_err(
&mut self,
e: io::Error,
req: ServiceRequest,
) -> Either<
Ready<Result<ServiceResponse, Error>>,
LocalBoxFuture<'static, Result<ServiceResponse, Error>>,
> {
log::debug!("Files: Failed to handle {}: {}", req.path(), e);
if let Some(ref mut default) = self.default {
Either::Right(default.call(req))
} else {
Either::Left(ok(req.error_response(e)))
}
}
}
impl Service for FilesService {
type Request = ServiceRequest;
type Response = ServiceResponse;
type Error = Error;
type Future = Either<
Ready<Result<Self::Response, Self::Error>>,
LocalBoxFuture<'static, Result<Self::Response, Self::Error>>,
>;
fn poll_ready(&mut self, _: &mut Context) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
fn call(&mut self, req: ServiceRequest) -> Self::Future {
let is_method_valid = if let Some(guard) = &self.guards {
// execute user defined guards
(**guard).check(req.head())
} else {
// default behavior
matches!(*req.method(), Method::HEAD | Method::GET)
};
if !is_method_valid {
return Either::Left(ok(req.into_response(
actix_web::HttpResponse::MethodNotAllowed()
.header(header::CONTENT_TYPE, "text/plain")
.body("Request did not meet this resource's requirements."),
)));
}
let real_path = match PathBufWrp::get_pathbuf(req.match_info().path()) {
Ok(item) => item,
Err(e) => return Either::Left(ok(req.error_response(e))),
};
// full file path
let path = match self.directory.join(&real_path.0).canonicalize() {
Ok(path) => path,
Err(e) => return self.handle_err(e, req),
};
if path.is_dir() {
if let Some(ref redir_index) = self.index {
if self.redirect_to_slash && !req.path().ends_with('/') {
let redirect_to = format!("{}/", req.path());
return Either::Left(ok(req.into_response(
HttpResponse::Found()
.header(header::LOCATION, redirect_to)
.body("")
.into_body(),
)));
}
let path = path.join(redir_index);
match NamedFile::open(path) {
Ok(mut named_file) => {
if let Some(ref mime_override) = self.mime_override {
let new_disposition =
mime_override(&named_file.content_type.type_());
named_file.content_disposition.disposition = new_disposition;
}
named_file.flags = self.file_flags;
let (req, _) = req.into_parts();
Either::Left(ok(match named_file.into_response(&req) {
Ok(item) => ServiceResponse::new(req, item),
Err(e) => ServiceResponse::from_err(e, req),
}))
}
Err(e) => self.handle_err(e, req),
}
} else if self.show_index {
let dir = Directory::new(self.directory.clone(), path);
let (req, _) = req.into_parts();
let x = (self.renderer)(&dir, &req);
match x {
Ok(resp) => Either::Left(ok(resp)),
Err(e) => Either::Left(ok(ServiceResponse::from_err(e, req))),
}
} else {
Either::Left(ok(ServiceResponse::from_err(
FilesError::IsDirectory,
req.into_parts().0,
)))
}
} else {
match NamedFile::open(path) {
Ok(mut named_file) => {
if let Some(ref mime_override) = self.mime_override {
let new_disposition =
mime_override(&named_file.content_type.type_());
named_file.content_disposition.disposition = new_disposition;
}
named_file.flags = self.file_flags;
let (req, _) = req.into_parts();
match named_file.into_response(&req) {
Ok(item) => {
Either::Left(ok(ServiceResponse::new(req.clone(), item)))
}
Err(e) => Either::Left(ok(ServiceResponse::from_err(e, req))),
}
}
Err(e) => self.handle_err(e, req),
}
}
}
}
#[derive(Debug)]
struct PathBufWrp(PathBuf);
impl PathBufWrp {
fn get_pathbuf(path: &str) -> Result<Self, UriSegmentError> {
let mut buf = PathBuf::new();
for segment in path.split('/') {
if segment == ".." {
buf.pop();
} else if segment.starts_with('.') {
return Err(UriSegmentError::BadStart('.'));
} else if segment.starts_with('*') {
return Err(UriSegmentError::BadStart('*'));
} else if segment.ends_with(':') {
return Err(UriSegmentError::BadEnd(':'));
} else if segment.ends_with('>') {
return Err(UriSegmentError::BadEnd('>'));
} else if segment.ends_with('<') {
return Err(UriSegmentError::BadEnd('<'));
} else if segment.is_empty() {
continue;
} else if cfg!(windows) && segment.contains('\\') {
return Err(UriSegmentError::BadChar('\\'));
} else {
buf.push(segment)
}
}
Ok(PathBufWrp(buf))
}
}
impl FromRequest for PathBufWrp {
type Error = UriSegmentError;
type Future = Ready<Result<Self, Self::Error>>;
type Config = ();
fn from_request(req: &HttpRequest, _: &mut Payload) -> Self::Future {
ready(PathBufWrp::get_pathbuf(req.match_info().path()))
}
}
#[cfg(test)]
mod tests {
use std::{
fs::{self, File},
ops::Add,
time::{Duration, SystemTime},
};
use actix_service::ServiceFactory;
use actix_web::{
guard,
http::{
header::{self, ContentDisposition, DispositionParam, DispositionType},
Method, StatusCode,
},
middleware::Compress,
test::{self, TestRequest},
web, App, HttpResponse, Responder,
};
use futures_util::future::ok;
use super::*;
use actix_web::guard;
use actix_web::http::header::{
self, ContentDisposition, DispositionParam, DispositionType,
};
use actix_web::http::{Method, StatusCode};
use actix_web::middleware::Compress;
use actix_web::test::{self, TestRequest};
use actix_web::{App, Responder};
#[actix_rt::test]
async fn test_file_extension_to_mime() {
let m = file_extension_to_mime("");
assert_eq!(m, mime::APPLICATION_OCTET_STREAM);
let m = file_extension_to_mime("jpg"); let m = file_extension_to_mime("jpg");
assert_eq!(m, mime::IMAGE_JPEG); assert_eq!(m, mime::IMAGE_JPEG);
@ -898,7 +334,7 @@ mod tests {
#[actix_rt::test]
async fn test_mime_override() {
fn all_attachment(_: &mime::Name<'_>) -> DispositionType {
DispositionType::Attachment
}
@ -1010,7 +446,7 @@ mod tests {
// Check file contents
let bytes = response.body().await.unwrap();
let data = web::Bytes::from(fs::read("tests/test.binary").unwrap());
assert_eq!(bytes, data);
}
@ -1043,7 +479,7 @@ mod tests {
assert_eq!(response.status(), StatusCode::OK);
let bytes = test::read_body(response).await;
let data = web::Bytes::from(fs::read("tests/test space.binary").unwrap());
assert_eq!(bytes, data);
}
@ -1221,7 +657,7 @@ mod tests {
let resp = test::call_service(&mut st, req).await;
assert_eq!(resp.status(), StatusCode::OK);
let bytes = test::read_body(resp).await;
assert_eq!(bytes, web::Bytes::from_static(b"default content"));
}
// #[actix_rt::test]
@ -1337,36 +773,4 @@ mod tests {
// let response = srv.execute(request.send()).unwrap();
// assert_eq!(response.status(), StatusCode::OK);
// }
#[actix_rt::test]
async fn test_path_buf() {
assert_eq!(
PathBufWrp::get_pathbuf("/test/.tt").map(|t| t.0),
Err(UriSegmentError::BadStart('.'))
);
assert_eq!(
PathBufWrp::get_pathbuf("/test/*tt").map(|t| t.0),
Err(UriSegmentError::BadStart('*'))
);
assert_eq!(
PathBufWrp::get_pathbuf("/test/tt:").map(|t| t.0),
Err(UriSegmentError::BadEnd(':'))
);
assert_eq!(
PathBufWrp::get_pathbuf("/test/tt<").map(|t| t.0),
Err(UriSegmentError::BadEnd('<'))
);
assert_eq!(
PathBufWrp::get_pathbuf("/test/tt>").map(|t| t.0),
Err(UriSegmentError::BadEnd('>'))
);
assert_eq!(
PathBufWrp::get_pathbuf("/seg1/seg2/").unwrap().0,
PathBuf::from_iter(vec!["seg1", "seg2"])
);
assert_eq!(
PathBufWrp::get_pathbuf("/seg1/../seg2/").unwrap().0,
PathBuf::from_iter(vec!["seg2"])
);
}
}


@ -7,32 +7,36 @@ use std::time::{SystemTime, UNIX_EPOCH};
#[cfg(unix)]
use std::os::unix::fs::MetadataExt;
use actix_web::{
dev::{BodyEncoding, SizedStream},
http::{
header::{
self, Charset, ContentDisposition, DispositionParam, DispositionType,
ExtendedValue,
},
ContentEncoding, StatusCode,
},
Error, HttpMessage, HttpRequest, HttpResponse, Responder,
};
use bitflags::bitflags;
use futures_util::future::{ready, Ready};
use mime_guess::from_path;
use actix_http::body::SizedStream;
use actix_web::dev::BodyEncoding;
use actix_web::http::header::{
self, Charset, ContentDisposition, DispositionParam, DispositionType, ExtendedValue,
};
use actix_web::http::{ContentEncoding, StatusCode};
use actix_web::{Error, HttpMessage, HttpRequest, HttpResponse, Responder};
use futures_util::future::{ready, Ready};
use crate::range::HttpRange;
use crate::ChunkedReadFile;
use crate::{encoding::equiv_utf8_text, range::HttpRange};
bitflags! {
pub(crate) struct Flags: u8 {
const ETAG = 0b0000_0001;
const LAST_MD = 0b0000_0010;
const CONTENT_DISPOSITION = 0b0000_0100;
const PREFER_UTF8 = 0b0000_1000;
}
}
impl Default for Flags {
fn default() -> Self {
Flags::from_bits_truncate(0b0000_0111)
}
}
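For clarity, a small sketch of what the new default expands to (written as it might appear in a unit test next to the flags; `PREFER_UTF8` stays opt-in for now):

```rust
// `from_bits_truncate(0b0000_0111)` is just the three pre-existing flags.
let default = Flags::ETAG | Flags::LAST_MD | Flags::CONTENT_DISPOSITION;
assert_eq!(Flags::default(), default);
assert!(!Flags::default().contains(Flags::PREFER_UTF8));
```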
@ -89,12 +93,15 @@ impl NamedFile {
};
let ct = from_path(&path).first_or_octet_stream();
let disposition = match ct.type_() {
mime::IMAGE | mime::TEXT | mime::VIDEO => DispositionType::Inline,
_ => DispositionType::Attachment,
};
let mut parameters =
vec![DispositionParam::Filename(String::from(filename.as_ref()))];
if !filename.is_ascii() {
parameters.push(DispositionParam::FilenameExt(ExtendedValue {
charset: Charset::Ext(String::from("UTF-8")),
@ -102,16 +109,19 @@ impl NamedFile {
value: filename.into_owned().into_bytes(),
}))
}
let cd = ContentDisposition {
disposition,
parameters,
};
(ct, cd)
};
let md = file.metadata()?;
let modified = md.modified().ok();
let encoding = None;
Ok(NamedFile {
path,
file,
@ -183,7 +193,7 @@ impl NamedFile {
/// image, and video content types, and `attachment` otherwise, and
/// the filename is taken from the path provided in the `open` method
/// after converting it to UTF-8 using
/// [`std::ffi::OsStr::to_string_lossy`].
#[inline]
pub fn set_content_disposition(mut self, cd: header::ContentDisposition) -> Self {
self.content_disposition = cd;
@ -207,24 +217,33 @@ impl NamedFile {
self
}
/// Specifies whether to use ETag or not.
///
/// Default is true.
#[inline]
pub fn use_etag(mut self, value: bool) -> Self {
self.flags.set(Flags::ETAG, value);
self
}
/// Specifies whether to use Last-Modified or not.
///
/// Default is true.
#[inline]
pub fn use_last_modified(mut self, value: bool) -> Self {
self.flags.set(Flags::LAST_MD, value);
self
}
/// Specifies whether text responses should signal a UTF-8 encoding.
///
/// Default is false (but will default to true in a future version).
#[inline]
pub fn prefer_utf8(mut self, value: bool) -> Self {
self.flags.set(Flags::PREFER_UTF8, value);
self
}
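A hedged usage sketch of the new builder method (the path is illustrative):

```rust
use actix_files::NamedFile;

// Open a text file and opt in to the UTF-8 charset hint.
fn open_readme() -> std::io::Result<NamedFile> {
    Ok(NamedFile::open("./static/readme.txt")?.prefer_utf8(true))
}
```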
pub(crate) fn etag(&self) -> Option<header::EntityTag> {
// This etag format is similar to Apache's.
self.modified.as_ref().map(|mtime| {
@ -242,6 +261,7 @@ impl NamedFile {
let dur = mtime
.duration_since(UNIX_EPOCH)
.expect("modification time must be after epoch");
header::EntityTag::strong(format!(
"{:x}:{:x}:{:x}:{:x}",
ino,
@ -256,19 +276,29 @@ impl NamedFile {
self.modified.map(|mtime| mtime.into())
}
/// Creates an `HttpResponse` with file as a streaming body.
pub fn into_response(self, req: &HttpRequest) -> Result<HttpResponse, Error> {
if self.status_code != StatusCode::OK {
let mut res = HttpResponse::build(self.status_code);
if self.flags.contains(Flags::PREFER_UTF8) {
let ct = equiv_utf8_text(self.content_type.clone());
res.header(header::CONTENT_TYPE, ct.to_string());
} else {
res.header(header::CONTENT_TYPE, self.content_type.to_string());
}
if self.flags.contains(Flags::CONTENT_DISPOSITION) {
res.header(
header::CONTENT_DISPOSITION,
self.content_disposition.to_string(),
);
}
if let Some(current_encoding) = self.encoding {
res.encoding(current_encoding);
}
let reader = ChunkedReadFile {
size: self.md.len(),
offset: 0,
@ -276,7 +306,8 @@ impl NamedFile {
fut: None,
counter: 0,
};
return Ok(res.streaming(reader));
}
let etag = if self.flags.contains(Flags::ETAG) {
@ -284,6 +315,7 @@ impl NamedFile {
} else {
None
};
let last_modified = if self.flags.contains(Flags::LAST_MD) {
self.last_modified()
} else {
@ -298,6 +330,7 @@ impl NamedFile {
{
let t1: SystemTime = m.clone().into();
let t2: SystemTime = since.clone().into();
match (t1.duration_since(UNIX_EPOCH), t2.duration_since(UNIX_EPOCH)) {
(Ok(t1), Ok(t2)) => t1 > t2,
_ => false,
@ -309,13 +342,14 @@ impl NamedFile {
// check last modified
let not_modified = if !none_match(etag.as_ref(), req) {
true
} else if req.headers().contains_key(header::IF_NONE_MATCH) {
false
} else if let (Some(ref m), Some(header::IfModifiedSince(ref since))) =
(last_modified, req.get_header())
{
let t1: SystemTime = m.clone().into();
let t2: SystemTime = since.clone().into();
match (t1.duration_since(UNIX_EPOCH), t2.duration_since(UNIX_EPOCH)) {
(Ok(t1), Ok(t2)) => t1 <= t2,
_ => false,
@ -325,24 +359,33 @@ impl NamedFile {
};
let mut resp = HttpResponse::build(self.status_code);
if self.flags.contains(Flags::PREFER_UTF8) {
let ct = equiv_utf8_text(self.content_type.clone());
resp.header(header::CONTENT_TYPE, ct.to_string());
} else {
resp.header(header::CONTENT_TYPE, self.content_type.to_string());
}
if self.flags.contains(Flags::CONTENT_DISPOSITION) {
resp.header(
header::CONTENT_DISPOSITION,
self.content_disposition.to_string(),
);
}
// default compressing
if let Some(current_encoding) = self.encoding {
resp.encoding(current_encoding);
}
if let Some(lm) = last_modified {
resp.header(header::LAST_MODIFIED, lm.to_string());
}
if let Some(etag) = etag {
resp.header(header::ETAG, etag.to_string());
}
resp.header(header::ACCEPT_RANGES, "bytes");
@ -350,11 +393,12 @@ impl NamedFile {
let mut offset = 0;
// check for range header
if let Some(ranges) = req.headers().get(header::RANGE) {
if let Ok(ranges_header) = ranges.to_str() {
if let Ok(ranges) = HttpRange::parse(ranges_header, length) {
length = ranges[0].length;
offset = ranges[0].start;
resp.encoding(ContentEncoding::Identity);
resp.header(
header::CONTENT_RANGE,
@ -414,6 +458,7 @@ impl DerefMut for NamedFile {
fn any_match(etag: Option<&header::EntityTag>, req: &HttpRequest) -> bool {
match req.get_header::<header::IfMatch>() {
None | Some(header::IfMatch::Any) => true,
Some(header::IfMatch::Items(ref items)) => {
if let Some(some_etag) = etag {
for item in items {
@ -422,6 +467,7 @@ fn any_match(etag: Option<&header::EntityTag>, req: &HttpRequest) -> bool {
}
}
}
false
}
}
@ -431,6 +477,7 @@ fn any_match(etag: Option<&header::EntityTag>, req: &HttpRequest) -> bool {
fn none_match(etag: Option<&header::EntityTag>, req: &HttpRequest) -> bool {
match req.get_header::<header::IfNoneMatch>() {
Some(header::IfNoneMatch::Any) => false,
Some(header::IfNoneMatch::Items(ref items)) => {
if let Some(some_etag) = etag {
for item in items {
@ -439,8 +486,10 @@ fn none_match(etag: Option<&header::EntityTag>, req: &HttpRequest) -> bool {
}
}
}
true
}
None => true,
}
}

actix-files/src/path_buf.rs Normal file

@ -0,0 +1,119 @@
use std::{
path::{Path, PathBuf},
str::FromStr,
};
use actix_web::{dev::Payload, FromRequest, HttpRequest};
use futures_util::future::{ready, Ready};
use crate::error::UriSegmentError;
#[derive(Debug)]
pub(crate) struct PathBufWrap(PathBuf);
impl FromStr for PathBufWrap {
type Err = UriSegmentError;
fn from_str(path: &str) -> Result<Self, Self::Err> {
Self::parse_path(path, false)
}
}
impl PathBufWrap {
/// Parse a path, giving the choice of allowing hidden files to be considered valid segments.
pub fn parse_path(path: &str, hidden_files: bool) -> Result<Self, UriSegmentError> {
let mut buf = PathBuf::new();
for segment in path.split('/') {
if segment == ".." {
buf.pop();
} else if !hidden_files && segment.starts_with('.') {
return Err(UriSegmentError::BadStart('.'));
} else if segment.starts_with('*') {
return Err(UriSegmentError::BadStart('*'));
} else if segment.ends_with(':') {
return Err(UriSegmentError::BadEnd(':'));
} else if segment.ends_with('>') {
return Err(UriSegmentError::BadEnd('>'));
} else if segment.ends_with('<') {
return Err(UriSegmentError::BadEnd('<'));
} else if segment.is_empty() {
continue;
} else if cfg!(windows) && segment.contains('\\') {
return Err(UriSegmentError::BadChar('\\'));
} else {
buf.push(segment)
}
}
Ok(PathBufWrap(buf))
}
}
impl AsRef<Path> for PathBufWrap {
fn as_ref(&self) -> &Path {
self.0.as_ref()
}
}
impl FromRequest for PathBufWrap {
type Error = UriSegmentError;
type Future = Ready<Result<Self, Self::Error>>;
type Config = ();
fn from_request(req: &HttpRequest, _: &mut Payload) -> Self::Future {
ready(req.match_info().path().parse())
}
}
#[cfg(test)]
mod tests {
use std::iter::FromIterator;
use super::*;
#[test]
fn test_path_buf() {
assert_eq!(
PathBufWrap::from_str("/test/.tt").map(|t| t.0),
Err(UriSegmentError::BadStart('.'))
);
assert_eq!(
PathBufWrap::from_str("/test/*tt").map(|t| t.0),
Err(UriSegmentError::BadStart('*'))
);
assert_eq!(
PathBufWrap::from_str("/test/tt:").map(|t| t.0),
Err(UriSegmentError::BadEnd(':'))
);
assert_eq!(
PathBufWrap::from_str("/test/tt<").map(|t| t.0),
Err(UriSegmentError::BadEnd('<'))
);
assert_eq!(
PathBufWrap::from_str("/test/tt>").map(|t| t.0),
Err(UriSegmentError::BadEnd('>'))
);
assert_eq!(
PathBufWrap::from_str("/seg1/seg2/").unwrap().0,
PathBuf::from_iter(vec!["seg1", "seg2"])
);
assert_eq!(
PathBufWrap::from_str("/seg1/../seg2/").unwrap().0,
PathBuf::from_iter(vec!["seg2"])
);
}
#[test]
fn test_parse_path() {
assert_eq!(
PathBufWrap::parse_path("/test/.tt", false).map(|t| t.0),
Err(UriSegmentError::BadStart('.'))
);
assert_eq!(
PathBufWrap::parse_path("/test/.tt", true).unwrap().0,
PathBuf::from_iter(vec!["test", ".tt"])
);
}
}


@ -1,11 +1,14 @@
/// HTTP Range header representation.
#[derive(Debug, Clone, Copy)]
pub struct HttpRange {
/// Start of range.
pub start: u64,
/// Length of range.
pub length: u64,
}
const PREFIX: &str = "bytes=";
const PREFIX_LEN: usize = 6;
impl HttpRange {

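As a usage sketch (mirroring how `into_response` consumes it above; the header value and resource size are illustrative):

```rust
use actix_files::HttpRange;

// A 10_000-byte resource and a single requested byte range.
if let Ok(ranges) = HttpRange::parse("bytes=0-499", 10_000) {
    assert_eq!(ranges[0].start, 0);
    assert_eq!(ranges[0].length, 500);
}
```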
actix-files/src/service.rs Normal file

@ -0,0 +1,169 @@
use std::{
fmt, io,
path::PathBuf,
rc::Rc,
task::{Context, Poll},
};
use actix_service::Service;
use actix_web::{
dev::{ServiceRequest, ServiceResponse},
error::Error,
guard::Guard,
http::{header, Method},
HttpResponse,
};
use futures_util::future::{ok, Either, LocalBoxFuture, Ready};
use crate::{
named, Directory, DirectoryRenderer, FilesError, HttpService, MimeOverride,
NamedFile, PathBufWrap,
};
/// Assembled file serving service.
pub struct FilesService {
pub(crate) directory: PathBuf,
pub(crate) index: Option<String>,
pub(crate) show_index: bool,
pub(crate) redirect_to_slash: bool,
pub(crate) default: Option<HttpService>,
pub(crate) renderer: Rc<DirectoryRenderer>,
pub(crate) mime_override: Option<Rc<MimeOverride>>,
pub(crate) file_flags: named::Flags,
pub(crate) guards: Option<Rc<dyn Guard>>,
pub(crate) hidden_files: bool,
}
type FilesServiceFuture = Either<
Ready<Result<ServiceResponse, Error>>,
LocalBoxFuture<'static, Result<ServiceResponse, Error>>,
>;
impl FilesService {
fn handle_err(&mut self, e: io::Error, req: ServiceRequest) -> FilesServiceFuture {
log::debug!("Failed to handle {}: {}", req.path(), e);
if let Some(ref mut default) = self.default {
Either::Right(default.call(req))
} else {
Either::Left(ok(req.error_response(e)))
}
}
}
impl fmt::Debug for FilesService {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.write_str("FilesService")
}
}
impl Service for FilesService {
type Request = ServiceRequest;
type Response = ServiceResponse;
type Error = Error;
type Future = FilesServiceFuture;
fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
fn call(&mut self, req: ServiceRequest) -> Self::Future {
let is_method_valid = if let Some(guard) = &self.guards {
// execute user defined guards
(**guard).check(req.head())
} else {
// default behavior
matches!(*req.method(), Method::HEAD | Method::GET)
};
if !is_method_valid {
return Either::Left(ok(req.into_response(
actix_web::HttpResponse::MethodNotAllowed()
.header(header::CONTENT_TYPE, "text/plain")
.body("Request did not meet this resource's requirements."),
)));
}
let real_path =
match PathBufWrap::parse_path(req.match_info().path(), self.hidden_files) {
Ok(item) => item,
Err(e) => return Either::Left(ok(req.error_response(e))),
};
// full file path
let path = match self.directory.join(&real_path).canonicalize() {
Ok(path) => path,
Err(e) => return self.handle_err(e, req),
};
if path.is_dir() {
if let Some(ref redir_index) = self.index {
if self.redirect_to_slash && !req.path().ends_with('/') {
let redirect_to = format!("{}/", req.path());
return Either::Left(ok(req.into_response(
HttpResponse::Found()
.header(header::LOCATION, redirect_to)
.body("")
.into_body(),
)));
}
let path = path.join(redir_index);
match NamedFile::open(path) {
Ok(mut named_file) => {
if let Some(ref mime_override) = self.mime_override {
let new_disposition =
mime_override(&named_file.content_type.type_());
named_file.content_disposition.disposition = new_disposition;
}
named_file.flags = self.file_flags;
let (req, _) = req.into_parts();
Either::Left(ok(match named_file.into_response(&req) {
Ok(item) => ServiceResponse::new(req, item),
Err(e) => ServiceResponse::from_err(e, req),
}))
}
Err(e) => self.handle_err(e, req),
}
} else if self.show_index {
let dir = Directory::new(self.directory.clone(), path);
let (req, _) = req.into_parts();
let x = (self.renderer)(&dir, &req);
match x {
Ok(resp) => Either::Left(ok(resp)),
Err(e) => Either::Left(ok(ServiceResponse::from_err(e, req))),
}
} else {
Either::Left(ok(ServiceResponse::from_err(
FilesError::IsDirectory,
req.into_parts().0,
)))
}
} else {
match NamedFile::open(path) {
Ok(mut named_file) => {
if let Some(ref mime_override) = self.mime_override {
let new_disposition =
mime_override(&named_file.content_type.type_());
named_file.content_disposition.disposition = new_disposition;
}
named_file.flags = self.file_flags;
let (req, _) = req.into_parts();
match named_file.into_response(&req) {
Ok(item) => {
Either::Left(ok(ServiceResponse::new(req.clone(), item)))
}
Err(e) => Either::Left(ok(ServiceResponse::from_err(e, req))),
}
}
Err(e) => self.handle_err(e, req),
}
}
}
}


@ -0,0 +1,40 @@
use actix_files::Files;
use actix_web::{
http::{
header::{self, HeaderValue},
StatusCode,
},
test::{self, TestRequest},
App,
};
#[actix_rt::test]
async fn test_utf8_file_contents() {
// use default ISO-8859-1 encoding
let mut srv =
test::init_service(App::new().service(Files::new("/", "./tests"))).await;
let req = TestRequest::with_uri("/utf8.txt").to_request();
let res = test::call_service(&mut srv, req).await;
assert_eq!(res.status(), StatusCode::OK);
assert_eq!(
res.headers().get(header::CONTENT_TYPE),
Some(&HeaderValue::from_static("text/plain")),
);
// prefer UTF-8 encoding
let mut srv = test::init_service(
App::new().service(Files::new("/", "./tests").prefer_utf8(true)),
)
.await;
let req = TestRequest::with_uri("/utf8.txt").to_request();
let res = test::call_service(&mut srv, req).await;
assert_eq!(res.status(), StatusCode::OK);
assert_eq!(
res.headers().get(header::CONTENT_TYPE),
Some(&HeaderValue::from_static("text/plain; charset=utf-8")),
);
}


@ -0,0 +1,3 @@
中文内容显示正确。
English is OK.


@ -1,3 +0,0 @@
# Framed app for actix web
**This crate has been deprecated and removed.**


@ -1,7 +1,22 @@
# Changes

## Unreleased - 2020-xx-xx
## 2.1.0 - 2020-11-25
* Add ability to set address for `TestServer`. [#1645]
* Upgrade `base64` to `0.13`.
* Upgrade `serde_urlencoded` to `0.7`. [#1773]
[#1773]: https://github.com/actix/actix-web/pull/1773
[#1645]: https://github.com/actix/actix-web/pull/1645
## 2.0.0 - 2020-09-11
* Update actix-codec and actix-utils dependencies.
## 2.0.0-alpha.1 - 2020-05-23
* Update the `time` dependency to 0.2.7
* Update `actix-connect` dependency to 2.0.0-alpha.2
* Make `test_server` `async` fn.
@ -10,74 +25,56 @@
* Update `base64` dependency to 0.12
* Update `env_logger` dependency to 0.7

## 1.0.0 - 2019-12-13
* Replaced `TestServer::start()` with `test_server()`

## 1.0.0-alpha.3 - 2019-12-07
* Migrate to `std::future`

## 0.2.5 - 2019-09-17
* Update serde_urlencoded to "0.6.1"
* Increase TestServerRuntime timeouts from 500ms to 3000ms
* Do not override current `System`

## 0.2.4 - 2019-07-18
* Update actix-server to 0.6

## 0.2.3 - 2019-07-16
* Add `delete`, `options`, `patch` methods to `TestServerRunner`

## 0.2.2 - 2019-06-16
* Add .put() and .sput() methods

## 0.2.1 - 2019-06-05
* Add license files

## 0.2.0 - 2019-05-12
* Update awc and actix-http deps

## 0.1.1 - 2019-04-24
* Always make new connection for http client

## 0.1.0 - 2019-04-16
* No changes

## 0.1.0-alpha.3 - 2019-04-02
* Request functions accept path #743

## 0.1.0-alpha.2 - 2019-03-29
* Added TestServerRuntime::load_body() method
* Update actix-http and awc libraries

## 0.1.0-alpha.1 - 2019-03-28
* Initial impl


@ -1,8 +1,8 @@
[package]
name = "actix-http-test"
version = "2.1.0"
authors = ["Nikolay Kim <fafhrd91@gmail.com>"]
description = "Various helpers for Actix applications to use during testing"
readme = "README.md"
keywords = ["http", "web", "framework", "async", "futures"]
homepage = "https://actix.rs"
@ -29,16 +29,16 @@ default = []
openssl = ["open-ssl", "awc/openssl"] openssl = ["open-ssl", "awc/openssl"]
[dependencies] [dependencies]
actix-service = "1.0.1" actix-service = "1.0.6"
actix-codec = "0.2.0" actix-codec = "0.3.0"
actix-connect = "2.0.0-alpha.2" actix-connect = "2.0.0"
actix-utils = "1.0.3" actix-utils = "2.0.0"
actix-rt = "1.0.0" actix-rt = "1.1.1"
actix-server = "1.0.0" actix-server = "1.0.0"
actix-testing = "1.0.0" actix-testing = "1.0.0"
awc = "2.0.0-alpha.2" awc = "2.0.0"
base64 = "0.12" base64 = "0.13"
bytes = "0.5.3" bytes = "0.5.3"
futures-core = { version = "0.3.5", default-features = false } futures-core = { version = "0.3.5", default-features = false }
http = "0.2.0" http = "0.2.0"
@ -47,10 +47,10 @@ socket2 = "0.3"
serde = "1.0" serde = "1.0"
serde_json = "1.0" serde_json = "1.0"
slab = "0.4" slab = "0.4"
serde_urlencoded = "0.6.1" serde_urlencoded = "0.7"
time = { version = "0.2.7", default-features = false, features = ["std"] } time = { version = "0.2.7", default-features = false, features = ["std"] }
open-ssl = { version = "0.10", package = "openssl", optional = true } open-ssl = { version = "0.10", package = "openssl", optional = true }
[dev-dependencies] [dev-dependencies]
actix-web = "3.0.0-alpha.3" actix-web = "3.0.0"
actix-http = "2.0.0-beta.1" actix-http = "2.0.0"

actix-http-test/README.md Normal file

@ -0,0 +1,15 @@
# actix-http-test
> Various helpers for Actix applications to use during testing.
[![crates.io](https://img.shields.io/crates/v/actix-http-test?label=latest)](https://crates.io/crates/actix-http-test)
[![Documentation](https://docs.rs/actix-http-test/badge.svg?version=2.1.0)](https://docs.rs/actix-http-test/2.1.0)
![Apache 2.0 or MIT licensed](https://img.shields.io/crates/l/actix-http-test)
[![Dependency Status](https://deps.rs/crate/actix-http-test/2.1.0/status.svg)](https://deps.rs/crate/actix-http-test/2.1.0)
[![Join the chat at https://gitter.im/actix/actix-web](https://badges.gitter.im/actix/actix-web.svg)](https://gitter.im/actix/actix-web?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
## Documentation & Resources
- [API Documentation](https://docs.rs/actix-http-test)
- [Chat on Gitter](https://gitter.im/actix/actix-web)
- Minimum Supported Rust Version (MSRV): 1.42.0


@ -1,4 +1,9 @@
//! Various helpers for Actix applications to use during testing.

#![deny(rust_2018_idioms)]
#![doc(html_logo_url = "https://actix.rs/img/logo.png")]
#![doc(html_favicon_url = "https://actix.rs/favicon.ico")]

use std::sync::mpsc;
use std::{net, thread, time};
@ -44,12 +49,20 @@ pub use actix_testing::*;
/// }
/// ```
pub async fn test_server<F: ServiceFactory<TcpStream>>(factory: F) -> TestServer {
let tcp = net::TcpListener::bind("127.0.0.1:0").unwrap();
test_server_with_addr(tcp, factory).await
}
/// Start [`test server`](test_server()) on a concrete Address
pub async fn test_server_with_addr<F: ServiceFactory<TcpStream>>(
tcp: net::TcpListener,
factory: F,
) -> TestServer {
let (tx, rx) = mpsc::channel();
// run server in separate thread
thread::spawn(move || {
let sys = System::new("actix-test-server");
let local_addr = tcp.local_addr().unwrap();
Server::build()
@ -90,7 +103,7 @@ pub async fn test_server<F: ServiceFactory<TcpStream>>(factory: F) -> TestServer
}
};
Client::builder().connector(connector).finish()
};
actix_connect::start_default_resolver().await.unwrap();


@ -1,9 +1,62 @@
# Changes

## Unreleased - 2020-xx-xx
### Changed
* Bumped `rand` to `0.8`
## 2.2.0 - 2020-11-25
### Added
* HttpResponse builders for 1xx status codes. [#1768]
* `Accept::mime_precedence` and `Accept::mime_preference`. [#1793]
* `TryFrom<u16>` and `TryFrom<f32>` for `http::header::Quality`. [#1797]
### Fixed
* Started dropping `transfer-encoding: chunked` and `Content-Length` for 1XX and 204 responses. [#1767]
### Changed
* Upgrade `serde_urlencoded` to `0.7`. [#1773]
[#1773]: https://github.com/actix/actix-web/pull/1773
[#1767]: https://github.com/actix/actix-web/pull/1767
[#1768]: https://github.com/actix/actix-web/pull/1768
[#1793]: https://github.com/actix/actix-web/pull/1793
[#1797]: https://github.com/actix/actix-web/pull/1797
## 2.1.0 - 2020-10-30
### Added
* Added more flexible `on_connect_ext` methods for on-connect handling. [#1754]
### Changed
* Upgrade `base64` to `0.13`. [#1744]
* Upgrade `pin-project` to `1.0`. [#1733]
* Deprecate `ResponseBuilder::{if_some, if_true}`. [#1760]
[#1760]: https://github.com/actix/actix-web/pull/1760
[#1754]: https://github.com/actix/actix-web/pull/1754
[#1733]: https://github.com/actix/actix-web/pull/1733
[#1744]: https://github.com/actix/actix-web/pull/1744
## 2.0.0 - 2020-09-11
* No significant changes from `2.0.0-beta.4`.
## 2.0.0-beta.4 - 2020-09-09
### Changed
* Update actix-codec and actix-utils dependencies.
* Update actix-connect and actix-tls dependencies.
## [2.0.0-beta.3] - 2020-08-14
### Fixed
* Memory leak of `client::pool::ConnectorPoolSupport`. [#1626]
[#1626]: https://github.com/actix/actix-web/pull/1626
## [2.0.0-beta.2] - 2020-07-21
### Fixed
* Potential UB in h1 decoder using uninitialized memory. [#1614]


@ -1,46 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at fafhrd91@gmail.com. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/


@ -1,8 +1,8 @@
[package]
name = "actix-http"
version = "2.2.0"
authors = ["Nikolay Kim <fafhrd91@gmail.com>"]
description = "HTTP primitives for the Actix ecosystem"
readme = "README.md"
keywords = ["actix", "http", "framework", "async", "futures"]
homepage = "https://actix.rs"
@ -40,16 +40,16 @@ secure-cookies = ["cookie/secure"]
actors = ["actix"] actors = ["actix"]
[dependencies] [dependencies]
actix-service = "1.0.5" actix-service = "1.0.6"
actix-codec = "0.2.0" actix-codec = "0.3.0"
actix-connect = "2.0.0-alpha.3" actix-connect = "2.0.0"
actix-utils = "1.0.6" actix-utils = "2.0.0"
actix-rt = "1.0.0" actix-rt = "1.0.0"
actix-threadpool = "0.3.1" actix-threadpool = "0.3.1"
actix-tls = { version = "2.0.0-alpha.1", optional = true } actix-tls = { version = "2.0.0", optional = true }
actix = { version = "0.10.0-alpha.1", optional = true } actix = { version = "0.10.0", optional = true }
base64 = "0.12" base64 = "0.13"
bitflags = "1.2" bitflags = "1.2"
bytes = "0.5.3" bytes = "0.5.3"
cookie = { version = "0.14.1", features = ["percent-encode"] } cookie = { version = "0.14.1", features = ["percent-encode"] }
@ -71,14 +71,14 @@ language-tags = "0.2"
log = "0.4" log = "0.4"
mime = "0.3" mime = "0.3"
percent-encoding = "2.1" percent-encoding = "2.1"
pin-project = "0.4.17" pin-project = "1.0.0"
rand = "0.7" rand = "0.8"
regex = "1.3" regex = "1.3"
serde = "1.0" serde = "1.0"
serde_json = "1.0" serde_json = "1.0"
sha-1 = "0.9" sha-1 = "0.9"
slab = "0.4" slab = "0.4"
serde_urlencoded = "0.6.1" serde_urlencoded = "0.7"
time = { version = "0.2.7", default-features = false, features = ["std"] } time = { version = "0.2.7", default-features = false, features = ["std"] }
# compression # compression
@ -87,14 +87,14 @@ flate2 = { version = "1.0.13", optional = true }
[dev-dependencies]
actix-server = "1.0.1"
actix-connect = { version = "2.0.0", features = ["openssl"] }
actix-http-test = { version = "2.0.0", features = ["openssl"] }
actix-tls = { version = "2.0.0", features = ["openssl"] }
criterion = "0.3"
env_logger = "0.7"
serde_derive = "1.0"
open-ssl = { version = "0.10", package = "openssl" }
rust-tls = { version = "0.18", package = "rustls" }
[[bench]]
name = "content-length"


@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2017-NOW Nikolay Kim
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

actix-http/LICENSE-APACHE Symbolic link

@ -0,0 +1 @@
../LICENSE-APACHE


@ -1,25 +0,0 @@
Copyright (c) 2017 Nikolay Kim
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.

actix-http/LICENSE-MIT Symbolic link

@ -0,0 +1 @@
../LICENSE-MIT


@ -1,24 +1,27 @@
# Actix http [![Build Status](https://travis-ci.org/actix/actix-web.svg?branch=master)](https://travis-ci.org/actix/actix-web) [![codecov](https://codecov.io/gh/actix/actix-web/branch/master/graph/badge.svg)](https://codecov.io/gh/actix/actix-web) [![crates.io](https://meritbadge.herokuapp.com/actix-http)](https://crates.io/crates/actix-http) [![Join the chat at https://gitter.im/actix/actix](https://badges.gitter.im/actix/actix.svg)](https://gitter.im/actix/actix?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) # actix-http
Actix http > HTTP primitives for the Actix ecosystem.
## Documentation & community resources [![crates.io](https://img.shields.io/crates/v/actix-http?label=latest)](https://crates.io/crates/actix-http)
[![Documentation](https://docs.rs/actix-http/badge.svg?version=2.2.0)](https://docs.rs/actix-http/2.2.0)
![Apache 2.0 or MIT licensed](https://img.shields.io/crates/l/actix-http)
[![Dependency Status](https://deps.rs/crate/actix-http/2.2.0/status.svg)](https://deps.rs/crate/actix-http/2.2.0)
[![Join the chat at https://gitter.im/actix/actix-web](https://badges.gitter.im/actix/actix-web.svg)](https://gitter.im/actix/actix-web?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
* [User Guide](https://actix.rs/docs/) ## Documentation & Resources
* [API Documentation](https://docs.rs/actix-http/)
* [Chat on gitter](https://gitter.im/actix/actix) - [API Documentation](https://docs.rs/actix-http)
* Cargo package: [actix-http](https://crates.io/crates/actix-http) - [Chat on Gitter](https://gitter.im/actix/actix-web)
* Minimum supported Rust version: 1.40 or later - Minimum Supported Rust Version (MSRV): 1.42.0
## Example ## Example
```rust ```rust
// see examples/framed_hello.rs for complete list of used crates.
use std::{env, io}; use std::{env, io};
use actix_http::{HttpService, Response}; use actix_http::{HttpService, Response};
use actix_server::Server; use actix_server::Server;
use futures::future; use futures_util::future;
use http::header::HeaderValue; use http::header::HeaderValue;
use log::info; use log::info;


@ -714,7 +714,7 @@ mod tests {
let body = resp_body.downcast_ref::<String>().unwrap(); let body = resp_body.downcast_ref::<String>().unwrap();
assert_eq!(body, "hello cast"); assert_eq!(body, "hello cast");
let body = &mut resp_body.downcast_mut::<String>().unwrap(); let body = &mut resp_body.downcast_mut::<String>().unwrap();
body.push_str("!"); body.push('!');
let body = resp_body.downcast_ref::<String>().unwrap(); let body = resp_body.downcast_ref::<String>().unwrap();
assert_eq!(body, "hello cast!"); assert_eq!(body, "hello cast!");
let not_body = resp_body.downcast_ref::<()>(); let not_body = resp_body.downcast_ref::<()>();


@ -14,10 +14,11 @@ use crate::helpers::{Data, DataFactory};
use crate::request::Request; use crate::request::Request;
use crate::response::Response; use crate::response::Response;
use crate::service::HttpService; use crate::service::HttpService;
use crate::{ConnectCallback, Extensions};
/// A http service builder /// A HTTP service builder
/// ///
/// This type can be used to construct an instance of `http service` through a /// This type can be used to construct an instance of [`HttpService`] through a
/// builder-like pattern. /// builder-like pattern.
pub struct HttpServiceBuilder<T, S, X = ExpectHandler, U = UpgradeHandler<T>> { pub struct HttpServiceBuilder<T, S, X = ExpectHandler, U = UpgradeHandler<T>> {
keep_alive: KeepAlive, keep_alive: KeepAlive,
@ -27,7 +28,9 @@ pub struct HttpServiceBuilder<T, S, X = ExpectHandler, U = UpgradeHandler<T>> {
local_addr: Option<net::SocketAddr>, local_addr: Option<net::SocketAddr>,
expect: X, expect: X,
upgrade: Option<U>, upgrade: Option<U>,
// DEPRECATED: in favor of on_connect_ext
on_connect: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>, on_connect: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>,
on_connect_ext: Option<Rc<ConnectCallback<T>>>,
_t: PhantomData<(T, S)>, _t: PhantomData<(T, S)>,
} }
@ -49,6 +52,7 @@ where
expect: ExpectHandler, expect: ExpectHandler,
upgrade: None, upgrade: None,
on_connect: None, on_connect: None,
on_connect_ext: None,
_t: PhantomData, _t: PhantomData,
} }
} }
@ -138,6 +142,7 @@ where
expect: expect.into_factory(), expect: expect.into_factory(),
upgrade: self.upgrade, upgrade: self.upgrade,
on_connect: self.on_connect, on_connect: self.on_connect,
on_connect_ext: self.on_connect_ext,
_t: PhantomData, _t: PhantomData,
} }
} }
@ -167,14 +172,16 @@ where
expect: self.expect, expect: self.expect,
upgrade: Some(upgrade.into_factory()), upgrade: Some(upgrade.into_factory()),
on_connect: self.on_connect, on_connect: self.on_connect,
on_connect_ext: self.on_connect_ext,
_t: PhantomData, _t: PhantomData,
} }
} }
/// Set on-connect callback. /// Set on-connect callback.
/// ///
/// It get called once per connection and result of the call /// Called once per connection. Return value of the call is stored in request extensions.
/// get stored to the request's extensions. ///
/// *SOFT DEPRECATED*: Prefer the `on_connect_ext` style callback.
pub fn on_connect<F, I>(mut self, f: F) -> Self pub fn on_connect<F, I>(mut self, f: F) -> Self
where where
F: Fn(&T) -> I + 'static, F: Fn(&T) -> I + 'static,
@ -184,7 +191,20 @@ where
self self
} }
/// Finish service configuration and create *http service* for HTTP/1 protocol. /// Sets the callback to be run on connection establishment.
///
/// Has mutable access to a data container that will be merged into request extensions.
/// This enables transport layer data (like client certificates) to be accessed in middleware
/// and handlers.
pub fn on_connect_ext<F>(mut self, f: F) -> Self
where
F: Fn(&T, &mut Extensions) + 'static,
{
self.on_connect_ext = Some(Rc::new(f));
self
}
/// Finish service configuration and create a HTTP Service for HTTP/1 protocol.
pub fn h1<F, B>(self, service: F) -> H1Service<T, S, B, X, U> pub fn h1<F, B>(self, service: F) -> H1Service<T, S, B, X, U>
where where
B: MessageBody, B: MessageBody,
@ -200,13 +220,15 @@ where
self.secure, self.secure,
self.local_addr, self.local_addr,
); );
H1Service::with_config(cfg, service.into_factory()) H1Service::with_config(cfg, service.into_factory())
.expect(self.expect) .expect(self.expect)
.upgrade(self.upgrade) .upgrade(self.upgrade)
.on_connect(self.on_connect) .on_connect(self.on_connect)
.on_connect_ext(self.on_connect_ext)
} }
/// Finish service configuration and create *http service* for HTTP/2 protocol. /// Finish service configuration and create a HTTP service for HTTP/2 protocol.
pub fn h2<F, B>(self, service: F) -> H2Service<T, S, B> pub fn h2<F, B>(self, service: F) -> H2Service<T, S, B>
where where
B: MessageBody + 'static, B: MessageBody + 'static,
@ -223,7 +245,10 @@ where
self.secure, self.secure,
self.local_addr, self.local_addr,
); );
H2Service::with_config(cfg, service.into_factory()).on_connect(self.on_connect)
H2Service::with_config(cfg, service.into_factory())
.on_connect(self.on_connect)
.on_connect_ext(self.on_connect_ext)
} }
/// Finish service configuration and create `HttpService` instance. /// Finish service configuration and create `HttpService` instance.
@ -243,9 +268,11 @@ where
self.secure, self.secure,
self.local_addr, self.local_addr,
); );
HttpService::with_config(cfg, service.into_factory()) HttpService::with_config(cfg, service.into_factory())
.expect(self.expect) .expect(self.expect)
.upgrade(self.upgrade) .upgrade(self.upgrade)
.on_connect(self.on_connect) .on_connect(self.on_connect)
.on_connect_ext(self.on_connect_ext)
} }
} }
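The hunk above adds `on_connect_ext` to `HttpServiceBuilder`. A hypothetical sketch of wiring it for a plain-TCP service, assuming actix-http 2.2 style builder methods; names and values are illustrative, and anything inserted into the container is merged into each request's extensions:

```rust
use actix_http::{Error, Extensions, HttpService, Request, Response};
use actix_rt::net::TcpStream;

fn main() {
    // in a real server this factory would be passed to
    // `actix_server::Server::build().bind(...)`
    let _factory = HttpService::build()
        .on_connect_ext(|_io: &TcpStream, ext: &mut Extensions| {
            // e.g. stash a per-connection tag or TLS client-certificate info
            ext.insert(42u32);
        })
        .finish(|_req: Request| async { Ok::<_, Error>(Response::Ok().finish()) })
        .tcp();
}
```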


@ -46,10 +46,10 @@ pub trait Connection {
pub(crate) trait ConnectionLifetime: AsyncRead + AsyncWrite + 'static { pub(crate) trait ConnectionLifetime: AsyncRead + AsyncWrite + 'static {
/// Close connection /// Close connection
fn close(&mut self); fn close(self: Pin<&mut Self>);
/// Release connection to the connection pool /// Release connection to the connection pool
fn release(&mut self); fn release(self: Pin<&mut Self>);
} }
#[doc(hidden)] #[doc(hidden)]
@ -195,11 +195,15 @@ where
match self { match self {
EitherConnection::A(con) => con EitherConnection::A(con) => con
.open_tunnel(head) .open_tunnel(head)
.map(|res| res.map(|(head, framed)| (head, framed.map_io(EitherIo::A)))) .map(|res| {
res.map(|(head, framed)| (head, framed.into_map_io(EitherIo::A)))
})
.boxed_local(), .boxed_local(),
EitherConnection::B(con) => con EitherConnection::B(con) => con
.open_tunnel(head) .open_tunnel(head)
.map(|res| res.map(|(head, framed)| (head, framed.map_io(EitherIo::B)))) .map(|res| {
res.map(|(head, framed)| (head, framed.into_map_io(EitherIo::B)))
})
.boxed_local(), .boxed_local(),
} }
} }


@ -67,17 +67,17 @@ where
}; };
// create Framed and send request // create Framed and send request
let mut framed = Framed::new(io, h1::ClientCodec::default()); let mut framed_inner = Framed::new(io, h1::ClientCodec::default());
framed.send((head, body.size()).into()).await?; framed_inner.send((head, body.size()).into()).await?;
// send request body // send request body
match body.size() { match body.size() {
BodySize::None | BodySize::Empty | BodySize::Sized(0) => (), BodySize::None | BodySize::Empty | BodySize::Sized(0) => (),
_ => send_body(body, &mut framed).await?, _ => send_body(body, Pin::new(&mut framed_inner)).await?,
}; };
// read response and init read body // read response and init read body
let res = framed.into_future().await; let res = Pin::new(&mut framed_inner).into_future().await;
let (head, framed) = if let (Some(result), framed) = res { let (head, framed) = if let (Some(result), framed) = res {
let item = result.map_err(SendRequestError::from)?; let item = result.map_err(SendRequestError::from)?;
(item, framed) (item, framed)
@ -85,14 +85,14 @@ where
return Err(SendRequestError::from(ConnectError::Disconnected)); return Err(SendRequestError::from(ConnectError::Disconnected));
}; };
match framed.get_codec().message_type() { match framed.codec_ref().message_type() {
h1::MessageType::None => { h1::MessageType::None => {
let force_close = !framed.get_codec().keepalive(); let force_close = !framed.codec_ref().keepalive();
release_connection(framed, force_close); release_connection(framed, force_close);
Ok((head, Payload::None)) Ok((head, Payload::None))
} }
_ => { _ => {
let pl: PayloadStream = PlStream::new(framed).boxed_local(); let pl: PayloadStream = PlStream::new(framed_inner).boxed_local();
Ok((head, pl.into())) Ok((head, pl.into()))
} }
} }
@ -119,35 +119,36 @@ where
} }
/// send request body to the peer /// send request body to the peer
pub(crate) async fn send_body<I, B>( pub(crate) async fn send_body<T, B>(
body: B, body: B,
framed: &mut Framed<I, h1::ClientCodec>, mut framed: Pin<&mut Framed<T, h1::ClientCodec>>,
) -> Result<(), SendRequestError> ) -> Result<(), SendRequestError>
where where
I: ConnectionLifetime, T: ConnectionLifetime + Unpin,
B: MessageBody, B: MessageBody,
{ {
let mut eof = false;
pin_mut!(body); pin_mut!(body);
let mut eof = false;
while !eof { while !eof {
while !eof && !framed.is_write_buf_full() { while !eof && !framed.as_ref().is_write_buf_full() {
match poll_fn(|cx| body.as_mut().poll_next(cx)).await { match poll_fn(|cx| body.as_mut().poll_next(cx)).await {
Some(result) => { Some(result) => {
framed.write(h1::Message::Chunk(Some(result?)))?; framed.as_mut().write(h1::Message::Chunk(Some(result?)))?;
} }
None => { None => {
eof = true; eof = true;
framed.write(h1::Message::Chunk(None))?; framed.as_mut().write(h1::Message::Chunk(None))?;
} }
} }
} }
if !framed.is_write_buf_empty() { if !framed.as_ref().is_write_buf_empty() {
poll_fn(|cx| match framed.flush(cx) { poll_fn(|cx| match framed.as_mut().flush(cx) {
Poll::Ready(Ok(_)) => Poll::Ready(Ok(())), Poll::Ready(Ok(_)) => Poll::Ready(Ok(())),
Poll::Ready(Err(err)) => Poll::Ready(Err(err)), Poll::Ready(Err(err)) => Poll::Ready(Err(err)),
Poll::Pending => { Poll::Pending => {
if !framed.is_write_buf_full() { if !framed.as_ref().is_write_buf_full() {
Poll::Ready(Ok(())) Poll::Ready(Ok(()))
} else { } else {
Poll::Pending Poll::Pending
@ -158,13 +159,14 @@ where
} }
} }
SinkExt::flush(framed).await?; SinkExt::flush(Pin::into_inner(framed)).await?;
Ok(()) Ok(())
} }
#[doc(hidden)] #[doc(hidden)]
/// HTTP client connection /// HTTP client connection
pub struct H1Connection<T> { pub struct H1Connection<T> {
/// T should be `Unpin`
io: Option<T>, io: Option<T>,
created: time::Instant, created: time::Instant,
pool: Option<Acquired<T>>, pool: Option<Acquired<T>>,
@ -175,7 +177,7 @@ where
T: AsyncRead + AsyncWrite + Unpin + 'static, T: AsyncRead + AsyncWrite + Unpin + 'static,
{ {
/// Close connection /// Close connection
fn close(&mut self) { fn close(mut self: Pin<&mut Self>) {
if let Some(mut pool) = self.pool.take() { if let Some(mut pool) = self.pool.take() {
if let Some(io) = self.io.take() { if let Some(io) = self.io.take() {
pool.close(IoConnection::new( pool.close(IoConnection::new(
@ -188,7 +190,7 @@ where
} }
/// Release this connection to the connection pool /// Release this connection to the connection pool
fn release(&mut self) { fn release(mut self: Pin<&mut Self>) {
if let Some(mut pool) = self.pool.take() { if let Some(mut pool) = self.pool.take() {
if let Some(io) = self.io.take() { if let Some(io) = self.io.take() {
pool.release(IoConnection::new( pool.release(IoConnection::new(
@ -242,14 +244,18 @@ impl<T: AsyncRead + AsyncWrite + Unpin + 'static> AsyncWrite for H1Connection<T>
} }
} }
#[pin_project::pin_project]
pub(crate) struct PlStream<Io> { pub(crate) struct PlStream<Io> {
#[pin]
framed: Option<Framed<Io, h1::ClientPayloadCodec>>, framed: Option<Framed<Io, h1::ClientPayloadCodec>>,
} }
impl<Io: ConnectionLifetime> PlStream<Io> { impl<Io: ConnectionLifetime> PlStream<Io> {
fn new(framed: Framed<Io, h1::ClientCodec>) -> Self { fn new(framed: Framed<Io, h1::ClientCodec>) -> Self {
let framed = framed.into_map_codec(|codec| codec.into_payload_codec());
PlStream { PlStream {
framed: Some(framed.map_codec(|codec| codec.into_payload_codec())), framed: Some(framed),
} }
} }
} }
@ -261,16 +267,16 @@ impl<Io: ConnectionLifetime> Stream for PlStream<Io> {
self: Pin<&mut Self>, self: Pin<&mut Self>,
cx: &mut Context<'_>, cx: &mut Context<'_>,
) -> Poll<Option<Self::Item>> { ) -> Poll<Option<Self::Item>> {
let this = self.get_mut(); let mut this = self.project();
match this.framed.as_mut().unwrap().next_item(cx)? { match this.framed.as_mut().as_pin_mut().unwrap().next_item(cx)? {
Poll::Pending => Poll::Pending, Poll::Pending => Poll::Pending,
Poll::Ready(Some(chunk)) => { Poll::Ready(Some(chunk)) => {
if let Some(chunk) = chunk { if let Some(chunk) = chunk {
Poll::Ready(Some(Ok(chunk))) Poll::Ready(Some(Ok(chunk)))
} else { } else {
let framed = this.framed.take().unwrap(); let framed = this.framed.as_mut().as_pin_mut().unwrap();
let force_close = !framed.get_codec().keepalive(); let force_close = !framed.codec_ref().keepalive();
release_connection(framed, force_close); release_connection(framed, force_close);
Poll::Ready(None) Poll::Ready(None)
} }
@ -280,14 +286,13 @@ impl<Io: ConnectionLifetime> Stream for PlStream<Io> {
} }
} }
fn release_connection<T, U>(framed: Framed<T, U>, force_close: bool) fn release_connection<T, U>(framed: Pin<&mut Framed<T, U>>, force_close: bool)
where where
T: ConnectionLifetime, T: ConnectionLifetime,
{ {
let mut parts = framed.into_parts(); if !force_close && framed.is_read_buf_empty() && framed.is_write_buf_empty() {
if !force_close && parts.read_buf.is_empty() && parts.write_buf.is_empty() { framed.io_pin().release()
parts.io.release()
} else { } else {
parts.io.close() framed.io_pin().close()
} }
} }


@ -2,15 +2,16 @@ use std::cell::RefCell;
use std::collections::VecDeque; use std::collections::VecDeque;
use std::future::Future; use std::future::Future;
use std::pin::Pin; use std::pin::Pin;
use std::rc::{Rc, Weak}; use std::rc::Rc;
use std::task::{Context, Poll}; use std::task::{Context, Poll};
use std::time::{Duration, Instant}; use std::time::{Duration, Instant};
use actix_codec::{AsyncRead, AsyncWrite}; use actix_codec::{AsyncRead, AsyncWrite};
use actix_rt::time::{delay_for, Delay}; use actix_rt::time::{delay_for, Delay};
use actix_service::Service; use actix_service::Service;
use actix_utils::{oneshot, task::LocalWaker}; use actix_utils::task::LocalWaker;
use bytes::Bytes; use bytes::Bytes;
use futures_channel::oneshot;
use futures_util::future::{poll_fn, FutureExt, LocalBoxFuture}; use futures_util::future::{poll_fn, FutureExt, LocalBoxFuture};
use fxhash::FxHashMap; use fxhash::FxHashMap;
use h2::client::{Connection, SendRequest}; use h2::client::{Connection, SendRequest};
@ -65,8 +66,8 @@ where
// start support future // start support future
actix_rt::spawn(ConnectorPoolSupport { actix_rt::spawn(ConnectorPoolSupport {
connector: connector_rc.clone(), connector: Rc::clone(&connector_rc),
inner: Rc::downgrade(&inner_rc), inner: Rc::clone(&inner_rc),
}); });
ConnectionPool(connector_rc, inner_rc) ConnectionPool(connector_rc, inner_rc)
@ -82,6 +83,13 @@ where
} }
} }
impl<T, Io> Drop for ConnectionPool<T, Io> {
fn drop(&mut self) {
// wake up the ConnectorPoolSupport when dropping so it can exit properly.
self.1.borrow().waker.wake();
}
}
impl<T, Io> Service for ConnectionPool<T, Io> impl<T, Io> Service for ConnectionPool<T, Io>
where where
Io: AsyncRead + AsyncWrite + Unpin + 'static, Io: AsyncRead + AsyncWrite + Unpin + 'static,
@ -112,11 +120,11 @@ where
match poll_fn(|cx| Poll::Ready(inner.borrow_mut().acquire(&key, cx))).await { match poll_fn(|cx| Poll::Ready(inner.borrow_mut().acquire(&key, cx))).await {
Acquire::Acquired(io, created) => { Acquire::Acquired(io, created) => {
// use existing connection // use existing connection
return Ok(IoConnection::new( Ok(IoConnection::new(
io, io,
created, created,
Some(Acquired(key, Some(inner))), Some(Acquired(key, Some(inner))),
)); ))
} }
Acquire::Available => { Acquire::Available => {
// open tcp connection // open tcp connection
@ -421,7 +429,7 @@ where
Io: AsyncRead + AsyncWrite + Unpin + 'static, Io: AsyncRead + AsyncWrite + Unpin + 'static,
{ {
connector: T, connector: T,
inner: Weak<RefCell<Inner<Io>>>, inner: Rc<RefCell<Inner<Io>>>,
} }
impl<T, Io> Future for ConnectorPoolSupport<T, Io> impl<T, Io> Future for ConnectorPoolSupport<T, Io>
@ -435,55 +443,57 @@ where
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> { fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
let this = self.project(); let this = self.project();
if let Some(this_inner) = this.inner.upgrade() { if Rc::strong_count(this.inner) == 1 {
let mut inner = this_inner.as_ref().borrow_mut(); // If we are last copy of Inner<Io> it means the ConnectionPool is already gone
inner.waker.register(cx.waker()); // and we are safe to exit.
return Poll::Ready(());
}
// check waiters let mut inner = this.inner.borrow_mut();
loop { inner.waker.register(cx.waker());
let (key, token) = {
if let Some((key, token)) = inner.waiters_queue.get_index(0) {
(key.clone(), *token)
} else {
break;
}
};
if inner.waiters.get(token).unwrap().is_none() {
continue;
}
match inner.acquire(&key, cx) { // check waiters
Acquire::NotAvailable => break, loop {
Acquire::Acquired(io, created) => { let (key, token) = {
let tx = inner.waiters.get_mut(token).unwrap().take().unwrap().1; if let Some((key, token)) = inner.waiters_queue.get_index(0) {
if let Err(conn) = tx.send(Ok(IoConnection::new( (key.clone(), *token)
io, } else {
created, break;
Some(Acquired(key.clone(), Some(this_inner.clone()))),
))) {
let (io, created) = conn.unwrap().into_inner();
inner.release_conn(&key, io, created);
}
}
Acquire::Available => {
let (connect, tx) =
inner.waiters.get_mut(token).unwrap().take().unwrap();
OpenWaitingConnection::spawn(
key.clone(),
tx,
this_inner.clone(),
this.connector.call(connect),
inner.config.clone(),
);
}
} }
let _ = inner.waiters_queue.swap_remove_index(0); };
if inner.waiters.get(token).unwrap().is_none() {
continue;
} }
Poll::Pending match inner.acquire(&key, cx) {
} else { Acquire::NotAvailable => break,
Poll::Ready(()) Acquire::Acquired(io, created) => {
let tx = inner.waiters.get_mut(token).unwrap().take().unwrap().1;
if let Err(conn) = tx.send(Ok(IoConnection::new(
io,
created,
Some(Acquired(key.clone(), Some(this.inner.clone()))),
))) {
let (io, created) = conn.unwrap().into_inner();
inner.release_conn(&key, io, created);
}
}
Acquire::Available => {
let (connect, tx) =
inner.waiters.get_mut(token).unwrap().take().unwrap();
OpenWaitingConnection::spawn(
key.clone(),
tx,
this.inner.clone(),
this.connector.call(connect),
inner.config.clone(),
);
}
}
let _ = inner.waiters_queue.swap_remove_index(0);
} }
Poll::Pending
} }
} }
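The pool change above replaces the `Weak` handle with an `Rc` plus a `Drop` impl that wakes the support future, which then exits once it holds the last reference. A reduced, illustrative sketch of that wake-on-drop pattern (all names here are made up, not the crate's types):

```rust
use std::cell::RefCell;
use std::future::Future;
use std::pin::Pin;
use std::rc::Rc;
use std::task::{Context, Poll, Waker};

struct Shared {
    waker: RefCell<Option<Waker>>,
}

// stand-in for ConnectionPool: wakes the support future when dropped
struct PoolHandle(Rc<Shared>);

impl Drop for PoolHandle {
    fn drop(&mut self) {
        if let Some(waker) = self.0.waker.borrow_mut().take() {
            waker.wake();
        }
    }
}

// stand-in for ConnectorPoolSupport: exits once it is the last owner
struct Support(Rc<Shared>);

impl Future for Support {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if Rc::strong_count(&self.0) == 1 {
            // the pool handle is gone; shut down cleanly
            return Poll::Ready(());
        }
        // re-register so the handle's Drop impl can wake us
        *self.0.waker.borrow_mut() = Some(cx.waker().clone());
        Poll::Pending
    }
}
```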


@ -4,12 +4,12 @@ use std::task::{Context, Poll};
use actix_service::Service; use actix_service::Service;
#[doc(hidden)]
/// Service that allows to turn non-clone service to a service with `Clone` impl /// Service that allows to turn non-clone service to a service with `Clone` impl
/// ///
/// # Panics /// # Panics
/// CloneableService might panic with some creative use of thread local storage. /// CloneableService might panic with some creative use of thread local storage.
/// See https://github.com/actix/actix-web/issues/1295 for example /// See https://github.com/actix/actix-web/issues/1295 for example
#[doc(hidden)]
pub(crate) struct CloneableService<T: Service>(Rc<RefCell<T>>); pub(crate) struct CloneableService<T: Service>(Rc<RefCell<T>>);
impl<T: Service> CloneableService<T> { impl<T: Service> CloneableService<T> {


@ -17,7 +17,7 @@ const DATE_VALUE_LENGTH: usize = 29;
pub enum KeepAlive { pub enum KeepAlive {
/// Keep alive in seconds /// Keep alive in seconds
Timeout(usize), Timeout(usize),
/// Relay on OS to shutdown tcp connection /// Rely on OS to shutdown tcp connection
Os, Os,
/// Disabled /// Disabled
Disabled, Disabled,
@ -209,6 +209,7 @@ impl Date {
date.update(); date.update();
date date
} }
fn update(&mut self) { fn update(&mut self) {
self.pos = 0; self.pos = 0;
write!( write!(


@ -1,4 +1,5 @@
//! Error and Result module //! Error and Result module
use std::cell::RefCell; use std::cell::RefCell;
use std::io::Write; use std::io::Write;
use std::str::Utf8Error; use std::str::Utf8Error;
@ -7,7 +8,7 @@ use std::{fmt, io, result};
use actix_codec::{Decoder, Encoder}; use actix_codec::{Decoder, Encoder};
pub use actix_threadpool::BlockingError; pub use actix_threadpool::BlockingError;
use actix_utils::framed::DispatcherError as FramedDispatcherError; use actix_utils::dispatcher::DispatcherError as FramedDispatcherError;
use actix_utils::timeout::TimeoutError; use actix_utils::timeout::TimeoutError;
use bytes::BytesMut; use bytes::BytesMut;
use derive_more::{Display, From}; use derive_more::{Display, From};
@ -24,7 +25,7 @@ pub use crate::cookie::ParseError as CookieParseError;
use crate::helpers::Writer; use crate::helpers::Writer;
use crate::response::{Response, ResponseBuilder}; use crate::response::{Response, ResponseBuilder};
/// A specialized [`Result`](https://doc.rust-lang.org/std/result/enum.Result.html) /// A specialized [`std::result::Result`]
/// for actix web operations /// for actix web operations
/// ///
/// This typedef is generally used to avoid writing out /// This typedef is generally used to avoid writing out
@ -452,10 +453,10 @@ impl ResponseError for ContentTypeError {
} }
} }
impl<E, U: Encoder + Decoder> ResponseError for FramedDispatcherError<E, U> impl<E, U: Encoder<I> + Decoder, I> ResponseError for FramedDispatcherError<E, U, I>
where where
E: fmt::Debug + fmt::Display, E: fmt::Debug + fmt::Display,
<U as Encoder>::Error: fmt::Debug, <U as Encoder<I>>::Error: fmt::Debug,
<U as Decoder>::Error: fmt::Debug, <U as Decoder>::Error: fmt::Debug,
{ {
} }


@ -1,10 +1,10 @@
use std::any::{Any, TypeId}; use std::any::{Any, TypeId};
use std::fmt; use std::{fmt, mem};
use fxhash::FxHashMap; use fxhash::FxHashMap;
#[derive(Default)]
/// A type map of request extensions. /// A type map of request extensions.
#[derive(Default)]
pub struct Extensions { pub struct Extensions {
/// Use FxHasher with a std HashMap with for faster /// Use FxHasher with a std HashMap with for faster
/// lookups on the small `TypeId` (u64 equivalent) keys. /// lookups on the small `TypeId` (u64 equivalent) keys.
@ -61,6 +61,16 @@ impl Extensions {
pub fn clear(&mut self) { pub fn clear(&mut self) {
self.map.clear(); self.map.clear();
} }
/// Extends self with the items from another `Extensions`.
pub fn extend(&mut self, other: Extensions) {
self.map.extend(other.map);
}
/// Sets (or overrides) items from `other` into this map.
pub(crate) fn drain_from(&mut self, other: &mut Self) {
self.map.extend(mem::take(&mut other.map));
}
} }
impl fmt::Debug for Extensions { impl fmt::Debug for Extensions {
@ -178,4 +188,57 @@ mod tests {
assert_eq!(extensions.get::<bool>(), None); assert_eq!(extensions.get::<bool>(), None);
assert_eq!(extensions.get(), Some(&MyType(10))); assert_eq!(extensions.get(), Some(&MyType(10)));
} }
#[test]
fn test_extend() {
#[derive(Debug, PartialEq)]
struct MyType(i32);
let mut extensions = Extensions::new();
extensions.insert(5i32);
extensions.insert(MyType(10));
let mut other = Extensions::new();
other.insert(15i32);
other.insert(20u8);
extensions.extend(other);
assert_eq!(extensions.get(), Some(&15i32));
assert_eq!(extensions.get_mut(), Some(&mut 15i32));
assert_eq!(extensions.remove::<i32>(), Some(15i32));
assert!(extensions.get::<i32>().is_none());
assert_eq!(extensions.get::<bool>(), None);
assert_eq!(extensions.get(), Some(&MyType(10)));
assert_eq!(extensions.get(), Some(&20u8));
assert_eq!(extensions.get_mut(), Some(&mut 20u8));
}
#[test]
fn test_drain_from() {
let mut ext = Extensions::new();
ext.insert(2isize);
let mut more_ext = Extensions::new();
more_ext.insert(5isize);
more_ext.insert(5usize);
assert_eq!(ext.get::<isize>(), Some(&2isize));
assert_eq!(ext.get::<usize>(), None);
assert_eq!(more_ext.get::<isize>(), Some(&5isize));
assert_eq!(more_ext.get::<usize>(), Some(&5usize));
ext.drain_from(&mut more_ext);
assert_eq!(ext.get::<isize>(), Some(&5isize));
assert_eq!(ext.get::<usize>(), Some(&5usize));
assert_eq!(more_ext.get::<isize>(), None);
assert_eq!(more_ext.get::<usize>(), None);
}
} }


@ -173,13 +173,12 @@ impl Decoder for ClientPayloadCodec {
} }
} }
impl Encoder for ClientCodec { impl Encoder<Message<(RequestHeadType, BodySize)>> for ClientCodec {
type Item = Message<(RequestHeadType, BodySize)>;
type Error = io::Error; type Error = io::Error;
fn encode( fn encode(
&mut self, &mut self,
item: Self::Item, item: Message<(RequestHeadType, BodySize)>,
dst: &mut BytesMut, dst: &mut BytesMut,
) -> Result<(), Self::Error> { ) -> Result<(), Self::Error> {
match item { match item {


@ -58,6 +58,7 @@ impl Codec {
} else { } else {
Flags::empty() Flags::empty()
}; };
Codec { Codec {
config, config,
flags, flags,
@ -69,26 +70,26 @@ impl Codec {
} }
} }
/// Check if request is upgrade.
#[inline] #[inline]
/// Check if request is upgrade
pub fn upgrade(&self) -> bool { pub fn upgrade(&self) -> bool {
self.ctype == ConnectionType::Upgrade self.ctype == ConnectionType::Upgrade
} }
/// Check if last response is keep-alive.
#[inline] #[inline]
/// Check if last response is keep-alive
pub fn keepalive(&self) -> bool { pub fn keepalive(&self) -> bool {
self.ctype == ConnectionType::KeepAlive self.ctype == ConnectionType::KeepAlive
} }
/// Check if keep-alive enabled on server level.
#[inline] #[inline]
/// Check if keep-alive enabled on server level
pub fn keepalive_enabled(&self) -> bool { pub fn keepalive_enabled(&self) -> bool {
self.flags.contains(Flags::KEEPALIVE_ENABLED) self.flags.contains(Flags::KEEPALIVE_ENABLED)
} }
/// Check last request's message type.
#[inline] #[inline]
/// Check last request's message type
pub fn message_type(&self) -> MessageType { pub fn message_type(&self) -> MessageType {
if self.flags.contains(Flags::STREAM) { if self.flags.contains(Flags::STREAM) {
MessageType::Stream MessageType::Stream
@ -144,13 +145,12 @@ impl Decoder for Codec {
} }
} }
impl Encoder for Codec { impl Encoder<Message<(Response<()>, BodySize)>> for Codec {
type Item = Message<(Response<()>, BodySize)>;
type Error = io::Error; type Error = io::Error;
fn encode( fn encode(
&mut self, &mut self,
item: Self::Item, item: Message<(Response<()>, BodySize)>,
dst: &mut BytesMut, dst: &mut BytesMut,
) -> Result<(), Self::Error> { ) -> Result<(), Self::Error> {
match item { match item {
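Both codec hunks above migrate `Encoder` from the associated `type Item` form to the generic-parameter form. A minimal sketch of that trait shape with a toy codec, assuming actix-codec 0.3 and its matching `bytes` version (illustrative only, not code from the crate):

```rust
use std::io;

use actix_codec::Encoder;
use bytes::BytesMut;

struct LineCodec;

impl Encoder<String> for LineCodec {
    type Error = io::Error;

    // the encoded item is now a type parameter on the trait, not `Self::Item`
    fn encode(&mut self, item: String, dst: &mut BytesMut) -> Result<(), Self::Error> {
        dst.extend_from_slice(item.as_bytes());
        dst.extend_from_slice(b"\n");
        Ok(())
    }
}
```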


@ -76,12 +76,14 @@ pub(crate) trait MessageType: Sized {
let name = let name =
HeaderName::from_bytes(&slice[idx.name.0..idx.name.1]).unwrap(); HeaderName::from_bytes(&slice[idx.name.0..idx.name.1]).unwrap();
// SAFETY: httparse checks header value is valid UTF-8 // SAFETY: httparse already checks header value is only visible ASCII bytes
// from_maybe_shared_unchecked contains debug assertions so they are omitted here
let value = unsafe { let value = unsafe {
HeaderValue::from_maybe_shared_unchecked( HeaderValue::from_maybe_shared_unchecked(
slice.slice(idx.value.0..idx.value.1), slice.slice(idx.value.0..idx.value.1),
) )
}; };
match name { match name {
header::CONTENT_LENGTH => { header::CONTENT_LENGTH => {
if let Ok(s) = value.to_str() { if let Ok(s) = value.to_str() {


@ -1,8 +1,11 @@
use std::collections::VecDeque; use std::{
use std::future::Future; collections::VecDeque,
use std::pin::Pin; fmt,
use std::task::{Context, Poll}; future::Future,
use std::{fmt, io, net}; io, mem, net,
pin::Pin,
task::{Context, Poll},
};
use actix_codec::{AsyncRead, AsyncWrite, Decoder, Encoder, Framed, FramedParts}; use actix_codec::{AsyncRead, AsyncWrite, Decoder, Encoder, Framed, FramedParts};
use actix_rt::time::{delay_until, Delay, Instant}; use actix_rt::time::{delay_until, Delay, Instant};
@ -12,7 +15,6 @@ use bytes::{Buf, BytesMut};
use log::{error, trace}; use log::{error, trace};
use pin_project::pin_project; use pin_project::pin_project;
use crate::body::{Body, BodySize, MessageBody, ResponseBody};
use crate::cloneable::CloneableService; use crate::cloneable::CloneableService;
use crate::config::ServiceConfig; use crate::config::ServiceConfig;
use crate::error::{DispatchError, Error}; use crate::error::{DispatchError, Error};
@ -21,6 +23,10 @@ use crate::helpers::DataFactory;
use crate::httpmessage::HttpMessage; use crate::httpmessage::HttpMessage;
use crate::request::Request; use crate::request::Request;
use crate::response::Response; use crate::response::Response;
use crate::{
body::{Body, BodySize, MessageBody, ResponseBody},
Extensions,
};
use super::codec::Codec; use super::codec::Codec;
use super::payload::{Payload, PayloadSender, PayloadStatus}; use super::payload::{Payload, PayloadSender, PayloadStatus};
@ -56,6 +62,9 @@ where
{ {
#[pin] #[pin]
inner: DispatcherState<T, S, B, X, U>, inner: DispatcherState<T, S, B, X, U>,
#[cfg(test)]
poll_count: u64,
} }
#[pin_project(project = DispatcherStateProj)] #[pin_project(project = DispatcherStateProj)]
@ -88,6 +97,7 @@ where
expect: CloneableService<X>, expect: CloneableService<X>,
upgrade: Option<CloneableService<U>>, upgrade: Option<CloneableService<U>>,
on_connect: Option<Box<dyn DataFactory>>, on_connect: Option<Box<dyn DataFactory>>,
on_connect_data: Extensions,
flags: Flags, flags: Flags,
peer_addr: Option<net::SocketAddr>, peer_addr: Option<net::SocketAddr>,
error: Option<DispatchError>, error: Option<DispatchError>,
@ -120,8 +130,8 @@ where
B: MessageBody, B: MessageBody,
{ {
None, None,
ExpectCall(Pin<Box<X::Future>>), ExpectCall(#[pin] X::Future),
ServiceCall(Pin<Box<S::Future>>), ServiceCall(#[pin] S::Future),
SendPayload(#[pin] ResponseBody<B>), SendPayload(#[pin] ResponseBody<B>),
} }
@ -167,7 +177,7 @@ where
U: Service<Request = (Request, Framed<T, Codec>), Response = ()>, U: Service<Request = (Request, Framed<T, Codec>), Response = ()>,
U::Error: fmt::Display, U::Error: fmt::Display,
{ {
/// Create http/1 dispatcher. /// Create HTTP/1 dispatcher.
pub(crate) fn new( pub(crate) fn new(
stream: T, stream: T,
config: ServiceConfig, config: ServiceConfig,
@ -175,6 +185,7 @@ where
expect: CloneableService<X>, expect: CloneableService<X>,
upgrade: Option<CloneableService<U>>, upgrade: Option<CloneableService<U>>,
on_connect: Option<Box<dyn DataFactory>>, on_connect: Option<Box<dyn DataFactory>>,
on_connect_data: Extensions,
peer_addr: Option<net::SocketAddr>, peer_addr: Option<net::SocketAddr>,
) -> Self { ) -> Self {
Dispatcher::with_timeout( Dispatcher::with_timeout(
@ -187,6 +198,7 @@ where
expect, expect,
upgrade, upgrade,
on_connect, on_connect,
on_connect_data,
peer_addr, peer_addr,
) )
} }
@ -202,6 +214,7 @@ where
expect: CloneableService<X>, expect: CloneableService<X>,
upgrade: Option<CloneableService<U>>, upgrade: Option<CloneableService<U>>,
on_connect: Option<Box<dyn DataFactory>>, on_connect: Option<Box<dyn DataFactory>>,
on_connect_data: Extensions,
peer_addr: Option<net::SocketAddr>, peer_addr: Option<net::SocketAddr>,
) -> Self { ) -> Self {
let keepalive = config.keep_alive_enabled(); let keepalive = config.keep_alive_enabled();
@ -234,11 +247,15 @@ where
expect, expect,
upgrade, upgrade,
on_connect, on_connect,
on_connect_data,
flags, flags,
peer_addr, peer_addr,
ka_expire, ka_expire,
ka_timer, ka_timer,
}), }),
#[cfg(test)]
poll_count: 0,
} }
} }
} }
@ -314,11 +331,15 @@ where
Poll::Ready(Err(err)) => return Err(DispatchError::Io(err)), Poll::Ready(Err(err)) => return Err(DispatchError::Io(err)),
} }
} }
if written == write_buf.len() { if written == write_buf.len() {
// SAFETY: setting length to 0 is safe
// skips one length check vs truncate
unsafe { write_buf.set_len(0) } unsafe { write_buf.set_len(0) }
} else { } else {
write_buf.advance(written); write_buf.advance(written);
} }
Ok(false) Ok(false)
} }
@ -326,7 +347,7 @@ where
self: Pin<&mut Self>, self: Pin<&mut Self>,
message: Response<()>, message: Response<()>,
body: ResponseBody<B>, body: ResponseBody<B>,
) -> Result<State<S, B, X>, DispatchError> { ) -> Result<(), DispatchError> {
let mut this = self.project(); let mut this = self.project();
this.codec this.codec
.encode(Message::Item((message, body.size())), &mut this.write_buf) .encode(Message::Item((message, body.size())), &mut this.write_buf)
@ -339,9 +360,10 @@ where
this.flags.set(Flags::KEEPALIVE, this.codec.keepalive()); this.flags.set(Flags::KEEPALIVE, this.codec.keepalive());
match body.size() { match body.size() {
BodySize::None | BodySize::Empty => Ok(State::None), BodySize::None | BodySize::Empty => this.state.set(State::None),
_ => Ok(State::SendPayload(body)), _ => this.state.set(State::SendPayload(body)),
} };
Ok(())
} }
fn send_continue(self: Pin<&mut Self>) { fn send_continue(self: Pin<&mut Self>) {
@ -356,49 +378,52 @@ where
) -> Result<PollResponse, DispatchError> { ) -> Result<PollResponse, DispatchError> {
loop { loop {
let mut this = self.as_mut().project(); let mut this = self.as_mut().project();
let state = match this.state.project() { // state is not changed on Poll::Pending.
// other variant and conditions always trigger a state change(or an error).
let state_change = match this.state.project() {
StateProj::None => match this.messages.pop_front() { StateProj::None => match this.messages.pop_front() {
Some(DispatcherMessage::Item(req)) => { Some(DispatcherMessage::Item(req)) => {
Some(self.as_mut().handle_request(req, cx)?) self.as_mut().handle_request(req, cx)?;
true
} }
Some(DispatcherMessage::Error(res)) => Some( Some(DispatcherMessage::Error(res)) => {
self.as_mut() self.as_mut()
.send_response(res, ResponseBody::Other(Body::Empty))?, .send_response(res, ResponseBody::Other(Body::Empty))?;
), true
}
Some(DispatcherMessage::Upgrade(req)) => { Some(DispatcherMessage::Upgrade(req)) => {
return Ok(PollResponse::Upgrade(req)); return Ok(PollResponse::Upgrade(req));
} }
None => None, None => false,
}, },
StateProj::ExpectCall(fut) => match fut.as_mut().poll(cx) { StateProj::ExpectCall(fut) => match fut.poll(cx) {
Poll::Ready(Ok(req)) => { Poll::Ready(Ok(req)) => {
self.as_mut().send_continue(); self.as_mut().send_continue();
this = self.as_mut().project(); this = self.as_mut().project();
this.state this.state.set(State::ServiceCall(this.service.call(req)));
.set(State::ServiceCall(Box::pin(this.service.call(req))));
continue; continue;
} }
Poll::Ready(Err(e)) => { Poll::Ready(Err(e)) => {
let res: Response = e.into().into(); let res: Response = e.into().into();
let (res, body) = res.replace_body(()); let (res, body) = res.replace_body(());
Some(self.as_mut().send_response(res, body.into_body())?) self.as_mut().send_response(res, body.into_body())?;
true
} }
Poll::Pending => None, Poll::Pending => false,
}, },
StateProj::ServiceCall(fut) => match fut.as_mut().poll(cx) { StateProj::ServiceCall(fut) => match fut.poll(cx) {
Poll::Ready(Ok(res)) => { Poll::Ready(Ok(res)) => {
let (res, body) = res.into().replace_body(()); let (res, body) = res.into().replace_body(());
let state = self.as_mut().send_response(res, body)?; self.as_mut().send_response(res, body)?;
this = self.as_mut().project();
this.state.set(state);
continue; continue;
} }
Poll::Ready(Err(e)) => { Poll::Ready(Err(e)) => {
let res: Response = e.into().into(); let res: Response = e.into().into();
let (res, body) = res.replace_body(()); let (res, body) = res.replace_body(());
Some(self.as_mut().send_response(res, body.into_body())?) self.as_mut().send_response(res, body.into_body())?;
true
} }
Poll::Pending => None, Poll::Pending => false,
}, },
StateProj::SendPayload(mut stream) => { StateProj::SendPayload(mut stream) => {
loop { loop {
@ -433,11 +458,8 @@ where
} }
}; };
this = self.as_mut().project(); // state is changed and continue when the state is not Empty
if state_change {
// set new state
if let Some(state) = state {
this.state.set(state);
if !self.state.is_empty() { if !self.state.is_empty() {
continue; continue;
} }
@ -462,49 +484,77 @@ where
mut self: Pin<&mut Self>, mut self: Pin<&mut Self>,
req: Request, req: Request,
cx: &mut Context<'_>, cx: &mut Context<'_>,
) -> Result<State<S, B, X>, DispatchError> { ) -> Result<(), DispatchError> {
// Handle `EXPECT: 100-Continue` header // Handle `EXPECT: 100-Continue` header
let req = if req.head().expect() { if req.head().expect() {
let mut task = Box::pin(self.as_mut().project().expect.call(req)); // set dispatcher state so the future is pinned.
match task.as_mut().poll(cx) { let task = self.as_mut().project().expect.call(req);
Poll::Ready(Ok(req)) => { self.as_mut().project().state.set(State::ExpectCall(task));
self.as_mut().send_continue();
req
}
Poll::Pending => return Ok(State::ExpectCall(task)),
Poll::Ready(Err(e)) => {
let e = e.into();
let res: Response = e.into();
let (res, body) = res.replace_body(());
return self.send_response(res, body.into_body());
}
}
} else { } else {
req // the same as above.
let task = self.as_mut().project().service.call(req);
self.as_mut().project().state.set(State::ServiceCall(task));
}; };
// Call service // eagerly poll the future for once(or twice if expect is resolved immediately).
let mut task = Box::pin(self.as_mut().project().service.call(req)); loop {
match task.as_mut().poll(cx) { match self.as_mut().project().state.project() {
Poll::Ready(Ok(res)) => { StateProj::ExpectCall(fut) => {
let (res, body) = res.into().replace_body(()); match fut.poll(cx) {
self.send_response(res, body) // expect is resolved. continue loop and poll the service call branch.
} Poll::Ready(Ok(req)) => {
Poll::Pending => Ok(State::ServiceCall(task)), self.as_mut().send_continue();
Poll::Ready(Err(e)) => { let task = self.as_mut().project().service.call(req);
let res: Response = e.into().into(); self.as_mut().project().state.set(State::ServiceCall(task));
let (res, body) = res.replace_body(()); continue;
self.send_response(res, body.into_body()) }
// future is pending. return Ok(()) to notify that a new state is
// set and the outer loop should be continue.
Poll::Pending => return Ok(()),
// future is error. send response and return a result. On success
// to notify the dispatcher a new state is set and the outer loop
// should be continue.
Poll::Ready(Err(e)) => {
let e = e.into();
let res: Response = e.into();
let (res, body) = res.replace_body(());
return self.send_response(res, body.into_body());
}
}
}
StateProj::ServiceCall(fut) => {
// return no matter the service call future's result.
return match fut.poll(cx) {
// future is resolved. send response and return a result. On success
// to notify the dispatcher a new state is set and the outer loop
// should be continue.
Poll::Ready(Ok(res)) => {
let (res, body) = res.into().replace_body(());
self.send_response(res, body)
}
// see the comment on ExpectCall state branch's Pending.
Poll::Pending => Ok(()),
// see the comment on ExpectCall state branch's Ready(Err(e)).
Poll::Ready(Err(e)) => {
let res: Response = e.into().into();
let (res, body) = res.replace_body(());
self.send_response(res, body.into_body())
}
};
}
_ => unreachable!(
"State must be set to ServiceCall or ExceptCall in handle_request"
),
} }
} }
} }
/// Process one incoming requests /// Process one incoming request.
pub(self) fn poll_request( pub(self) fn poll_request(
mut self: Pin<&mut Self>, mut self: Pin<&mut Self>,
cx: &mut Context<'_>, cx: &mut Context<'_>,
) -> Result<bool, DispatchError> { ) -> Result<bool, DispatchError> {
// limit a mount of non processed requests // limit amount of non-processed requests
if self.messages.len() >= MAX_PIPELINED_MESSAGES || !self.can_read(cx) { if self.messages.len() >= MAX_PIPELINED_MESSAGES || !self.can_read(cx) {
return Ok(false); return Ok(false);
} }
@ -522,11 +572,15 @@ where
let pl = this.codec.message_type(); let pl = this.codec.message_type();
req.head_mut().peer_addr = *this.peer_addr; req.head_mut().peer_addr = *this.peer_addr;
// DEPRECATED
// set on_connect data // set on_connect data
if let Some(ref on_connect) = this.on_connect { if let Some(ref on_connect) = this.on_connect {
on_connect.set(&mut req.extensions_mut()); on_connect.set(&mut req.extensions_mut());
} }
// merge on_connect_ext data into request extensions
req.extensions_mut().drain_from(this.on_connect_data);
if pl == MessageType::Stream && this.upgrade.is_some() { if pl == MessageType::Stream && this.upgrade.is_some() {
this.messages.push_back(DispatcherMessage::Upgrade(req)); this.messages.push_back(DispatcherMessage::Upgrade(req));
break; break;
@ -541,9 +595,8 @@ where
// handle request early // handle request early
if this.state.is_empty() { if this.state.is_empty() {
let state = self.as_mut().handle_request(req, cx)?; self.as_mut().handle_request(req, cx)?;
this = self.as_mut().project(); this = self.as_mut().project();
this.state.set(state);
} else { } else {
this.messages.push_back(DispatcherMessage::Item(req)); this.messages.push_back(DispatcherMessage::Item(req));
} }
@ -709,6 +762,12 @@ where
#[inline] #[inline]
fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> { fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
let this = self.as_mut().project(); let this = self.as_mut().project();
#[cfg(test)]
{
*this.poll_count += 1;
}
match this.inner.project() { match this.inner.project() {
DispatcherStateProj::Normal(mut inner) => { DispatcherStateProj::Normal(mut inner) => {
inner.as_mut().poll_keepalive(cx)?; inner.as_mut().poll_keepalive(cx)?;
@ -772,10 +831,10 @@ where
let inner_p = inner.as_mut().project(); let inner_p = inner.as_mut().project();
let mut parts = FramedParts::with_read_buf( let mut parts = FramedParts::with_read_buf(
inner_p.io.take().unwrap(), inner_p.io.take().unwrap(),
std::mem::take(inner_p.codec), mem::take(inner_p.codec),
std::mem::take(inner_p.read_buf), mem::take(inner_p.read_buf),
); );
parts.write_buf = std::mem::take(inner_p.write_buf); parts.write_buf = mem::take(inner_p.write_buf);
let framed = Framed::from_parts(parts); let framed = Framed::from_parts(parts);
let upgrade = let upgrade =
inner_p.upgrade.take().unwrap().call((req, framed)); inner_p.upgrade.take().unwrap().call((req, framed));
@ -787,8 +846,11 @@ where
} }
// we didn't get WouldBlock from write operation, // we didn't get WouldBlock from write operation,
// so data get written to kernel completely (OSX) // so data get written to kernel completely (macOS)
// and we have to write again otherwise response can get stuck // and we have to write again otherwise response can get stuck
//
// TODO: what? is WouldBlock good or bad?
// want to find a reference for this macOS behavior
if inner.as_mut().poll_flush(cx)? || !drain { if inner.as_mut().poll_flush(cx)? || !drain {
break; break;
} }
@ -838,6 +900,11 @@ where
} }
} }
/// Returns either:
/// - `Ok(Some(true))` - data was read and done reading all data.
/// - `Ok(Some(false))` - data was read but there should be more to read.
/// - `Ok(None)` - no data was read but there should be more to read later.
/// - Unhandled Errors
fn read_available<T>( fn read_available<T>(
cx: &mut Context<'_>, cx: &mut Context<'_>,
io: &mut T, io: &mut T,
@ -871,17 +938,17 @@ where
read_some = true; read_some = true;
} }
} }
Poll::Ready(Err(e)) => { Poll::Ready(Err(err)) => {
return if e.kind() == io::ErrorKind::WouldBlock { return if err.kind() == io::ErrorKind::WouldBlock {
if read_some { if read_some {
Ok(Some(false)) Ok(Some(false))
} else { } else {
Ok(None) Ok(None)
} }
} else if e.kind() == io::ErrorKind::ConnectionReset && read_some { } else if err.kind() == io::ErrorKind::ConnectionReset && read_some {
Ok(Some(true)) Ok(Some(true))
} else { } else {
Err(e) Err(err)
} }
} }
} }
@ -901,43 +968,376 @@ where
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use actix_service::IntoService; use std::{marker::PhantomData, str};
use futures_util::future::{lazy, ok};
use actix_service::fn_service;
use futures_util::future::{lazy, ready};
use super::*; use super::*;
use crate::error::Error;
use crate::h1::{ExpectHandler, UpgradeHandler};
use crate::test::TestBuffer; use crate::test::TestBuffer;
use crate::{error::Error, KeepAlive};
use crate::{
h1::{ExpectHandler, UpgradeHandler},
test::TestSeqBuffer,
};
fn find_slice(haystack: &[u8], needle: &[u8], from: usize) -> Option<usize> {
haystack[from..]
.windows(needle.len())
.position(|window| window == needle)
}
fn stabilize_date_header(payload: &mut [u8]) {
let mut from = 0;
while let Some(pos) = find_slice(&payload, b"date", from) {
payload[(from + pos)..(from + pos + 35)]
.copy_from_slice(b"date: Thu, 01 Jan 1970 12:34:56 UTC");
from += 35;
}
}
fn ok_service() -> impl Service<Request = Request, Response = Response, Error = Error>
{
fn_service(|_req: Request| ready(Ok::<_, Error>(Response::Ok().finish())))
}
fn echo_path_service(
) -> impl Service<Request = Request, Response = Response, Error = Error> {
fn_service(|req: Request| {
let path = req.path().as_bytes();
ready(Ok::<_, Error>(Response::Ok().body(Body::from_slice(path))))
})
}
fn echo_payload_service(
) -> impl Service<Request = Request, Response = Response, Error = Error> {
fn_service(|mut req: Request| {
Box::pin(async move {
use futures_util::stream::StreamExt as _;
let mut pl = req.take_payload();
let mut body = BytesMut::new();
while let Some(chunk) = pl.next().await {
body.extend_from_slice(chunk.unwrap().bytes())
}
Ok::<_, Error>(Response::Ok().body(body))
})
})
}
#[actix_rt::test] #[actix_rt::test]
async fn test_req_parse_err() { async fn test_req_parse_err() {
lazy(|cx| { lazy(|cx| {
let buf = TestBuffer::new("GET /test HTTP/1\r\n\r\n"); let buf = TestBuffer::new("GET /test HTTP/1\r\n\r\n");
let mut h1 = Dispatcher::<_, _, _, _, UpgradeHandler<TestBuffer>>::new( let h1 = Dispatcher::<_, _, _, _, UpgradeHandler<TestBuffer>>::new(
buf, buf,
ServiceConfig::default(), ServiceConfig::default(),
CloneableService::new( CloneableService::new(ok_service()),
(|_| ok::<_, Error>(Response::Ok().finish())).into_service(),
),
CloneableService::new(ExpectHandler), CloneableService::new(ExpectHandler),
None, None,
None, None,
Extensions::new(),
None, None,
); );
match Pin::new(&mut h1).poll(cx) {
futures_util::pin_mut!(h1);
match h1.as_mut().poll(cx) {
Poll::Pending => panic!(), Poll::Pending => panic!(),
Poll::Ready(res) => assert!(res.is_err()), Poll::Ready(res) => assert!(res.is_err()),
} }
if let DispatcherState::Normal(ref mut inner) = h1.inner { if let DispatcherStateProj::Normal(inner) = h1.project().inner.project() {
assert!(inner.flags.contains(Flags::READ_DISCONNECT)); assert!(inner.flags.contains(Flags::READ_DISCONNECT));
assert_eq!( assert_eq!(
&inner.io.take().unwrap().write_buf[..26], &inner.project().io.take().unwrap().write_buf[..26],
b"HTTP/1.1 400 Bad Request\r\n" b"HTTP/1.1 400 Bad Request\r\n"
); );
} }
}) })
.await; .await;
} }
#[actix_rt::test]
async fn test_pipelining() {
lazy(|cx| {
let buf = TestBuffer::new(
"\
GET /abcd HTTP/1.1\r\n\r\n\
GET /def HTTP/1.1\r\n\r\n\
",
);
let cfg = ServiceConfig::new(KeepAlive::Disabled, 1, 1, false, None);
let h1 = Dispatcher::<_, _, _, _, UpgradeHandler<TestBuffer>>::new(
buf,
cfg,
CloneableService::new(echo_path_service()),
CloneableService::new(ExpectHandler),
None,
None,
Extensions::new(),
None,
);
futures_util::pin_mut!(h1);
assert!(matches!(&h1.inner, DispatcherState::Normal(_)));
match h1.as_mut().poll(cx) {
Poll::Pending => panic!("first poll should not be pending"),
Poll::Ready(res) => assert!(res.is_ok()),
}
// polls: initial => shutdown
assert_eq!(h1.poll_count, 2);
if let DispatcherStateProj::Normal(inner) = h1.project().inner.project() {
let res = &mut inner.project().io.take().unwrap().write_buf[..];
stabilize_date_header(res);
let exp = b"\
HTTP/1.1 200 OK\r\n\
content-length: 5\r\n\
connection: close\r\n\
date: Thu, 01 Jan 1970 12:34:56 UTC\r\n\r\n\
/abcd\
HTTP/1.1 200 OK\r\n\
content-length: 4\r\n\
connection: close\r\n\
date: Thu, 01 Jan 1970 12:34:56 UTC\r\n\r\n\
/def\
";
assert_eq!(res.to_vec(), exp.to_vec());
}
})
.await;
lazy(|cx| {
let buf = TestBuffer::new(
"\
GET /abcd HTTP/1.1\r\n\r\n\
GET /def HTTP/1\r\n\r\n\
",
);
let cfg = ServiceConfig::new(KeepAlive::Disabled, 1, 1, false, None);
let h1 = Dispatcher::<_, _, _, _, UpgradeHandler<TestBuffer>>::new(
buf,
cfg,
CloneableService::new(echo_path_service()),
CloneableService::new(ExpectHandler),
None,
None,
Extensions::new(),
None,
);
futures_util::pin_mut!(h1);
assert!(matches!(&h1.inner, DispatcherState::Normal(_)));
match h1.as_mut().poll(cx) {
Poll::Pending => panic!("first poll should not be pending"),
Poll::Ready(res) => assert!(res.is_err()),
}
// polls: initial => shutdown
assert_eq!(h1.poll_count, 1);
if let DispatcherStateProj::Normal(inner) = h1.project().inner.project() {
let res = &mut inner.project().io.take().unwrap().write_buf[..];
stabilize_date_header(res);
let exp = b"\
HTTP/1.1 200 OK\r\n\
content-length: 5\r\n\
connection: close\r\n\
date: Thu, 01 Jan 1970 12:34:56 UTC\r\n\r\n\
/abcd\
HTTP/1.1 400 Bad Request\r\n\
content-length: 0\r\n\
connection: close\r\n\
date: Thu, 01 Jan 1970 12:34:56 UTC\r\n\r\n\
";
assert_eq!(res.to_vec(), exp.to_vec());
}
})
.await;
}
#[actix_rt::test]
async fn test_expect() {
lazy(|cx| {
let mut buf = TestSeqBuffer::empty();
let cfg = ServiceConfig::new(KeepAlive::Disabled, 0, 0, false, None);
let h1 = Dispatcher::<_, _, _, _, UpgradeHandler<_>>::new(
buf.clone(),
cfg,
CloneableService::new(echo_payload_service()),
CloneableService::new(ExpectHandler),
None,
None,
Extensions::new(),
None,
);
buf.extend_read_buf(
"\
POST /upload HTTP/1.1\r\n\
Content-Length: 5\r\n\
Expect: 100-continue\r\n\
\r\n\
",
);
futures_util::pin_mut!(h1);
assert!(h1.as_mut().poll(cx).is_pending());
assert!(matches!(&h1.inner, DispatcherState::Normal(_)));
// polls: manual
assert_eq!(h1.poll_count, 1);
eprintln!("poll count: {}", h1.poll_count);
if let DispatcherState::Normal(ref inner) = h1.inner {
let io = inner.io.as_ref().unwrap();
let res = &io.write_buf()[..];
assert_eq!(
str::from_utf8(res).unwrap(),
"HTTP/1.1 100 Continue\r\n\r\n"
);
}
buf.extend_read_buf("12345");
assert!(h1.as_mut().poll(cx).is_ready());
// polls: manual manual shutdown
assert_eq!(h1.poll_count, 3);
if let DispatcherState::Normal(ref inner) = h1.inner {
let io = inner.io.as_ref().unwrap();
let mut res = (&io.write_buf()[..]).to_owned();
stabilize_date_header(&mut res);
assert_eq!(
str::from_utf8(&res).unwrap(),
"\
HTTP/1.1 100 Continue\r\n\
\r\n\
HTTP/1.1 200 OK\r\n\
content-length: 5\r\n\
connection: close\r\n\
date: Thu, 01 Jan 1970 12:34:56 UTC\r\n\
\r\n\
12345\
"
);
}
})
.await;
}
#[actix_rt::test]
async fn test_eager_expect() {
lazy(|cx| {
let mut buf = TestSeqBuffer::empty();
let cfg = ServiceConfig::new(KeepAlive::Disabled, 0, 0, false, None);
let h1 = Dispatcher::<_, _, _, _, UpgradeHandler<_>>::new(
buf.clone(),
cfg,
CloneableService::new(echo_path_service()),
CloneableService::new(ExpectHandler),
None,
None,
Extensions::new(),
None,
);
buf.extend_read_buf(
"\
POST /upload HTTP/1.1\r\n\
Content-Length: 5\r\n\
Expect: 100-continue\r\n\
\r\n\
",
);
futures_util::pin_mut!(h1);
assert!(h1.as_mut().poll(cx).is_ready());
assert!(matches!(&h1.inner, DispatcherState::Normal(_)));
// polls: manual shutdown
assert_eq!(h1.poll_count, 2);
if let DispatcherState::Normal(ref inner) = h1.inner {
let io = inner.io.as_ref().unwrap();
let mut res = (&io.write_buf()[..]).to_owned();
stabilize_date_header(&mut res);
// Despite the content-length header and even though the request payload has not
// been sent, this test expects a complete service response since the payload
// is not used at all. The service passed to the dispatcher echoes the path and
// doesn't consume any payload bytes.
assert_eq!(
str::from_utf8(&res).unwrap(),
"\
HTTP/1.1 100 Continue\r\n\
\r\n\
HTTP/1.1 200 OK\r\n\
content-length: 7\r\n\
connection: close\r\n\
date: Thu, 01 Jan 1970 12:34:56 UTC\r\n\
\r\n\
/upload\
"
);
}
})
.await;
}
#[actix_rt::test]
async fn test_upgrade() {
lazy(|cx| {
let mut buf = TestSeqBuffer::empty();
let cfg = ServiceConfig::new(KeepAlive::Disabled, 0, 0, false, None);
let h1 = Dispatcher::<_, _, _, _, UpgradeHandler<_>>::new(
buf.clone(),
cfg,
CloneableService::new(ok_service()),
CloneableService::new(ExpectHandler),
Some(CloneableService::new(UpgradeHandler(PhantomData))),
None,
Extensions::new(),
None,
);
buf.extend_read_buf(
"\
GET /ws HTTP/1.1\r\n\
Connection: Upgrade\r\n\
Upgrade: websocket\r\n\
\r\n\
",
);
futures_util::pin_mut!(h1);
assert!(h1.as_mut().poll(cx).is_ready());
assert!(matches!(&h1.inner, DispatcherState::Upgrade(_)));
// polls: manual shutdown
assert_eq!(h1.poll_count, 2);
})
.await;
}
} }

@ -64,14 +64,17 @@ pub(crate) trait MessageType: Sized {
// Content length // Content length
if let Some(status) = self.status() { if let Some(status) = self.status() {
match status { match status {
StatusCode::NO_CONTENT StatusCode::CONTINUE
| StatusCode::CONTINUE | StatusCode::SWITCHING_PROTOCOLS
| StatusCode::PROCESSING => length = BodySize::None, | StatusCode::PROCESSING
StatusCode::SWITCHING_PROTOCOLS => { | StatusCode::NO_CONTENT => {
// skip content-length and transfer-encoding headers
// See https://tools.ietf.org/html/rfc7230#section-3.3.1
// and https://tools.ietf.org/html/rfc7230#section-3.3.2
skip_len = true; skip_len = true;
length = BodySize::Stream; length = BodySize::None
} }
_ => (), _ => {}
} }
} }
match length { match length {
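In effect, informational (1xx) and 204 responses never advertise body framing. A rough in-crate sketch of that expectation, modeled on the `test_no_content_length` test that appears later in this compare view; the 204 status, buffer size, and `ConnectionType::Close` here are illustrative choices rather than values taken from the diff:

let mut bytes = BytesMut::with_capacity(1024);

let mut res: Response<()> = Response::new(StatusCode::NO_CONTENT).into_body::<()>();
res.headers_mut()
    .insert(CONTENT_LENGTH, HeaderValue::from_static("0"));

let _ = res.encode_headers(
    &mut bytes,
    Version::HTTP_11,
    BodySize::None,
    ConnectionType::Close,
    &ServiceConfig::default(),
);

// neither framing header should survive encoding for a 204 response
let head = String::from_utf8(Vec::from(bytes.split().freeze().as_ref())).unwrap();
assert!(!head.contains("content-length"));
assert!(!head.contains("transfer-encoding"));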
@ -129,89 +132,133 @@ pub(crate) trait MessageType: Sized {
.chain(extra_headers.inner.iter()); .chain(extra_headers.inner.iter());
// write headers // write headers
let mut pos = 0;
let mut has_date = false; let mut has_date = false;
let mut remaining = dst.capacity() - dst.len();
let mut buf = dst.bytes_mut().as_mut_ptr() as *mut u8; let mut buf = dst.bytes_mut().as_mut_ptr() as *mut u8;
let mut remaining = dst.capacity() - dst.len();
// tracks bytes written since the last buffer resize
// since `buf` is a raw pointer into the bytes container's storage and is written to without
// the container's knowledge, this is used to sync the container's cursor after data is written
let mut pos = 0;
for (key, value) in headers { for (key, value) in headers {
match *key { match *key {
CONNECTION => continue, CONNECTION => continue,
TRANSFER_ENCODING | CONTENT_LENGTH if skip_len => continue, TRANSFER_ENCODING | CONTENT_LENGTH if skip_len => continue,
DATE => { DATE => has_date = true,
has_date = true;
}
_ => (), _ => (),
} }
let k = key.as_str().as_bytes(); let k = key.as_str().as_bytes();
let k_len = k.len();
match value { match value {
map::Value::One(ref val) => { map::Value::One(ref val) => {
let v = val.as_ref(); let v = val.as_ref();
let v_len = v.len(); let v_len = v.len();
let k_len = k.len();
// key length + value length + colon + space + \r\n
let len = k_len + v_len + 4; let len = k_len + v_len + 4;
if len > remaining { if len > remaining {
// not enough room in buffer for this header; reserve more space
// SAFETY: all the bytes written up to position "pos" are initialized
// the written byte count and pointer advancement are kept in sync
unsafe { unsafe {
dst.advance_mut(pos); dst.advance_mut(pos);
} }
pos = 0; pos = 0;
dst.reserve(len * 2); dst.reserve(len * 2);
remaining = dst.capacity() - dst.len(); remaining = dst.capacity() - dst.len();
// re-assign buf raw pointer since it's possible that the buffer was
// reallocated and/or resized
buf = dst.bytes_mut().as_mut_ptr() as *mut u8; buf = dst.bytes_mut().as_mut_ptr() as *mut u8;
} }
// use upper Camel-Case
// SAFETY: on each write, it is enough to ensure that the advancement of the
// cursor matches the number of bytes written
unsafe { unsafe {
// use upper Camel-Case
if camel_case { if camel_case {
write_camel_case(k, from_raw_parts_mut(buf, k_len)) write_camel_case(k, from_raw_parts_mut(buf, k_len))
} else { } else {
write_data(k, buf, k_len) write_data(k, buf, k_len)
} }
buf = buf.add(k_len); buf = buf.add(k_len);
write_data(b": ", buf, 2); write_data(b": ", buf, 2);
buf = buf.add(2); buf = buf.add(2);
write_data(v, buf, v_len); write_data(v, buf, v_len);
buf = buf.add(v_len); buf = buf.add(v_len);
write_data(b"\r\n", buf, 2); write_data(b"\r\n", buf, 2);
buf = buf.add(2); buf = buf.add(2);
pos += len;
remaining -= len;
} }
pos += len;
remaining -= len;
} }
map::Value::Multi(ref vec) => { map::Value::Multi(ref vec) => {
for val in vec { for val in vec {
let v = val.as_ref(); let v = val.as_ref();
let v_len = v.len(); let v_len = v.len();
let k_len = k.len();
let len = k_len + v_len + 4; let len = k_len + v_len + 4;
if len > remaining { if len > remaining {
// SAFETY: all the bytes written up to position "pos" are initialized
// the written byte count and pointer advancement are kept in sync
unsafe { unsafe {
dst.advance_mut(pos); dst.advance_mut(pos);
} }
pos = 0; pos = 0;
dst.reserve(len * 2); dst.reserve(len * 2);
remaining = dst.capacity() - dst.len(); remaining = dst.capacity() - dst.len();
// re-assign buf raw pointer since it's possible that the buffer was
// reallocated and/or resized
buf = dst.bytes_mut().as_mut_ptr() as *mut u8; buf = dst.bytes_mut().as_mut_ptr() as *mut u8;
} }
// use upper Camel-Case
// SAFETY: on each write, it is enough to ensure that the advancement of
// the cursor matches the number of bytes written
unsafe { unsafe {
if camel_case { if camel_case {
write_camel_case(k, from_raw_parts_mut(buf, k_len)); write_camel_case(k, from_raw_parts_mut(buf, k_len));
} else { } else {
write_data(k, buf, k_len); write_data(k, buf, k_len);
} }
buf = buf.add(k_len); buf = buf.add(k_len);
write_data(b": ", buf, 2); write_data(b": ", buf, 2);
buf = buf.add(2); buf = buf.add(2);
write_data(v, buf, v_len); write_data(v, buf, v_len);
buf = buf.add(v_len); buf = buf.add(v_len);
write_data(b"\r\n", buf, 2); write_data(b"\r\n", buf, 2);
buf = buf.add(2); buf = buf.add(2);
}; };
pos += len; pos += len;
remaining -= len; remaining -= len;
} }
} }
} }
} }
// final cursor synchronization with the bytes container
//
// SAFETY: all the bytes written up to position "pos" are initialized
// the written byte count and pointer advancement are kept in sync
unsafe { unsafe {
dst.advance_mut(pos); dst.advance_mut(pos);
} }
@ -477,7 +524,10 @@ impl<'a> io::Write for Writer<'a> {
} }
} }
/// # Safety
/// Callers must ensure that the given length matches the length of the given value.
unsafe fn write_data(value: &[u8], buf: *mut u8, len: usize) { unsafe fn write_data(value: &[u8], buf: *mut u8, len: usize) {
debug_assert_eq!(value.len(), len);
copy_nonoverlapping(value.as_ptr(), buf, len); copy_nonoverlapping(value.as_ptr(), buf, len);
} }
@ -629,4 +679,28 @@ mod tests {
assert!(data.contains("authorization: another authorization\r\n")); assert!(data.contains("authorization: another authorization\r\n"));
assert!(data.contains("date: date\r\n")); assert!(data.contains("date: date\r\n"));
} }
#[test]
fn test_no_content_length() {
let mut bytes = BytesMut::with_capacity(2048);
let mut res: Response<()> =
Response::new(StatusCode::SWITCHING_PROTOCOLS).into_body::<()>();
res.headers_mut()
.insert(DATE, HeaderValue::from_static(&""));
res.headers_mut()
.insert(CONTENT_LENGTH, HeaderValue::from_static(&"0"));
let _ = res.encode_headers(
&mut bytes,
Version::HTTP_11,
BodySize::Stream,
ConnectionType::Upgrade,
&ServiceConfig::default(),
);
let data =
String::from_utf8(Vec::from(bytes.split().freeze().as_ref())).unwrap();
assert!(!data.contains("content-length: 0\r\n"));
assert!(!data.contains("transfer-encoding: chunked\r\n"));
}
} }

@ -1,7 +1,7 @@
use std::task::{Context, Poll}; use std::task::{Context, Poll};
use actix_service::{Service, ServiceFactory}; use actix_service::{Service, ServiceFactory};
use futures_util::future::{ok, Ready}; use futures_util::future::{ready, Ready};
use crate::error::Error; use crate::error::Error;
use crate::request::Request; use crate::request::Request;
@ -17,8 +17,8 @@ impl ServiceFactory for ExpectHandler {
type InitError = Error; type InitError = Error;
type Future = Ready<Result<Self::Service, Self::InitError>>; type Future = Ready<Result<Self::Service, Self::InitError>>;
fn new_service(&self, _: ()) -> Self::Future { fn new_service(&self, _: Self::Config) -> Self::Future {
ok(ExpectHandler) ready(Ok(ExpectHandler))
} }
} }
@ -33,6 +33,8 @@ impl Service for ExpectHandler {
} }
fn call(&mut self, req: Request) -> Self::Future { fn call(&mut self, req: Request) -> Self::Future {
ok(req) ready(Ok(req))
// TODO: add some way to trigger error
// Err(error::ErrorExpectationFailed("test"))
} }
} }

@ -182,9 +182,7 @@ impl Inner {
self.len += data.len(); self.len += data.len();
self.items.push_back(data); self.items.push_back(data);
self.need_read = self.len < MAX_BUFFER_SIZE; self.need_read = self.len < MAX_BUFFER_SIZE;
if let Some(task) = self.task.take() { self.task.wake();
task.wake()
}
} }
#[cfg(test)] #[cfg(test)]

@ -18,6 +18,7 @@ use crate::error::{DispatchError, Error, ParseError};
use crate::helpers::DataFactory; use crate::helpers::DataFactory;
use crate::request::Request; use crate::request::Request;
use crate::response::Response; use crate::response::Response;
use crate::{ConnectCallback, Extensions};
use super::codec::Codec; use super::codec::Codec;
use super::dispatcher::Dispatcher; use super::dispatcher::Dispatcher;
@ -30,6 +31,7 @@ pub struct H1Service<T, S, B, X = ExpectHandler, U = UpgradeHandler<T>> {
expect: X, expect: X,
upgrade: Option<U>, upgrade: Option<U>,
on_connect: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>, on_connect: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>,
on_connect_ext: Option<Rc<ConnectCallback<T>>>,
_t: PhantomData<(T, B)>, _t: PhantomData<(T, B)>,
} }
@ -52,6 +54,7 @@ where
expect: ExpectHandler, expect: ExpectHandler,
upgrade: None, upgrade: None,
on_connect: None, on_connect: None,
on_connect_ext: None,
_t: PhantomData, _t: PhantomData,
} }
} }
@ -98,7 +101,7 @@ mod openssl {
use super::*; use super::*;
use actix_tls::openssl::{Acceptor, SslAcceptor, SslStream}; use actix_tls::openssl::{Acceptor, SslAcceptor, SslStream};
use actix_tls::{openssl::HandshakeError, SslError}; use actix_tls::{openssl::HandshakeError, TlsError};
impl<S, B, X, U> H1Service<SslStream<TcpStream>, S, B, X, U> impl<S, B, X, U> H1Service<SslStream<TcpStream>, S, B, X, U>
where where
@ -126,19 +129,19 @@ mod openssl {
Config = (), Config = (),
Request = TcpStream, Request = TcpStream,
Response = (), Response = (),
Error = SslError<HandshakeError<TcpStream>, DispatchError>, Error = TlsError<HandshakeError<TcpStream>, DispatchError>,
InitError = (), InitError = (),
> { > {
pipeline_factory( pipeline_factory(
Acceptor::new(acceptor) Acceptor::new(acceptor)
.map_err(SslError::Ssl) .map_err(TlsError::Tls)
.map_init_err(|_| panic!()), .map_init_err(|_| panic!()),
) )
.and_then(|io: SslStream<TcpStream>| { .and_then(|io: SslStream<TcpStream>| {
let peer_addr = io.get_ref().peer_addr().ok(); let peer_addr = io.get_ref().peer_addr().ok();
ok((io, peer_addr)) ok((io, peer_addr))
}) })
.and_then(self.map_err(SslError::Service)) .and_then(self.map_err(TlsError::Service))
} }
} }
} }
@ -147,7 +150,7 @@ mod openssl {
mod rustls { mod rustls {
use super::*; use super::*;
use actix_tls::rustls::{Acceptor, ServerConfig, TlsStream}; use actix_tls::rustls::{Acceptor, ServerConfig, TlsStream};
use actix_tls::SslError; use actix_tls::TlsError;
use std::{fmt, io}; use std::{fmt, io};
impl<S, B, X, U> H1Service<TlsStream<TcpStream>, S, B, X, U> impl<S, B, X, U> H1Service<TlsStream<TcpStream>, S, B, X, U>
@ -176,19 +179,19 @@ mod rustls {
Config = (), Config = (),
Request = TcpStream, Request = TcpStream,
Response = (), Response = (),
Error = SslError<io::Error, DispatchError>, Error = TlsError<io::Error, DispatchError>,
InitError = (), InitError = (),
> { > {
pipeline_factory( pipeline_factory(
Acceptor::new(config) Acceptor::new(config)
.map_err(SslError::Ssl) .map_err(TlsError::Tls)
.map_init_err(|_| panic!()), .map_init_err(|_| panic!()),
) )
.and_then(|io: TlsStream<TcpStream>| { .and_then(|io: TlsStream<TcpStream>| {
let peer_addr = io.get_ref().0.peer_addr().ok(); let peer_addr = io.get_ref().0.peer_addr().ok();
ok((io, peer_addr)) ok((io, peer_addr))
}) })
.and_then(self.map_err(SslError::Service)) .and_then(self.map_err(TlsError::Service))
} }
} }
} }
@ -213,6 +216,7 @@ where
srv: self.srv, srv: self.srv,
upgrade: self.upgrade, upgrade: self.upgrade,
on_connect: self.on_connect, on_connect: self.on_connect,
on_connect_ext: self.on_connect_ext,
_t: PhantomData, _t: PhantomData,
} }
} }
@ -229,6 +233,7 @@ where
srv: self.srv, srv: self.srv,
expect: self.expect, expect: self.expect,
on_connect: self.on_connect, on_connect: self.on_connect,
on_connect_ext: self.on_connect_ext,
_t: PhantomData, _t: PhantomData,
} }
} }
@ -241,6 +246,12 @@ where
self.on_connect = f; self.on_connect = f;
self self
} }
/// Set on connect callback.
pub(crate) fn on_connect_ext(mut self, f: Option<Rc<ConnectCallback<T>>>) -> Self {
self.on_connect_ext = f;
self
}
} }
impl<T, S, B, X, U> ServiceFactory for H1Service<T, S, B, X, U> impl<T, S, B, X, U> ServiceFactory for H1Service<T, S, B, X, U>
@ -274,6 +285,7 @@ where
expect: None, expect: None,
upgrade: None, upgrade: None,
on_connect: self.on_connect.clone(), on_connect: self.on_connect.clone(),
on_connect_ext: self.on_connect_ext.clone(),
cfg: Some(self.cfg.clone()), cfg: Some(self.cfg.clone()),
_t: PhantomData, _t: PhantomData,
} }
@ -303,6 +315,7 @@ where
expect: Option<X::Service>, expect: Option<X::Service>,
upgrade: Option<U::Service>, upgrade: Option<U::Service>,
on_connect: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>, on_connect: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>,
on_connect_ext: Option<Rc<ConnectCallback<T>>>,
cfg: Option<ServiceConfig>, cfg: Option<ServiceConfig>,
_t: PhantomData<(T, B)>, _t: PhantomData<(T, B)>,
} }
@ -352,23 +365,26 @@ where
Poll::Ready(result.map(|service| { Poll::Ready(result.map(|service| {
let this = self.as_mut().project(); let this = self.as_mut().project();
H1ServiceHandler::new( H1ServiceHandler::new(
this.cfg.take().unwrap(), this.cfg.take().unwrap(),
service, service,
this.expect.take().unwrap(), this.expect.take().unwrap(),
this.upgrade.take(), this.upgrade.take(),
this.on_connect.clone(), this.on_connect.clone(),
this.on_connect_ext.clone(),
) )
})) }))
} }
} }
/// `Service` implementation for HTTP1 transport /// `Service` implementation for HTTP/1 transport
pub struct H1ServiceHandler<T, S: Service, B, X: Service, U: Service> { pub struct H1ServiceHandler<T, S: Service, B, X: Service, U: Service> {
srv: CloneableService<S>, srv: CloneableService<S>,
expect: CloneableService<X>, expect: CloneableService<X>,
upgrade: Option<CloneableService<U>>, upgrade: Option<CloneableService<U>>,
on_connect: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>, on_connect: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>,
on_connect_ext: Option<Rc<ConnectCallback<T>>>,
cfg: ServiceConfig, cfg: ServiceConfig,
_t: PhantomData<(T, B)>, _t: PhantomData<(T, B)>,
} }
@ -390,6 +406,7 @@ where
expect: X, expect: X,
upgrade: Option<U>, upgrade: Option<U>,
on_connect: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>, on_connect: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>,
on_connect_ext: Option<Rc<ConnectCallback<T>>>,
) -> H1ServiceHandler<T, S, B, X, U> { ) -> H1ServiceHandler<T, S, B, X, U> {
H1ServiceHandler { H1ServiceHandler {
srv: CloneableService::new(srv), srv: CloneableService::new(srv),
@ -397,6 +414,7 @@ where
upgrade: upgrade.map(CloneableService::new), upgrade: upgrade.map(CloneableService::new),
cfg, cfg,
on_connect, on_connect,
on_connect_ext,
_t: PhantomData, _t: PhantomData,
} }
} }
@ -462,11 +480,13 @@ where
} }
fn call(&mut self, (io, addr): Self::Request) -> Self::Future { fn call(&mut self, (io, addr): Self::Request) -> Self::Future {
let on_connect = if let Some(ref on_connect) = self.on_connect { let deprecated_on_connect = self.on_connect.as_ref().map(|handler| handler(&io));
Some(on_connect(&io))
} else { let mut connect_extensions = Extensions::new();
None if let Some(ref handler) = self.on_connect_ext {
}; // run on_connect_ext callback, populating connect extensions
handler(&io, &mut connect_extensions);
}
Dispatcher::new( Dispatcher::new(
io, io,
@ -474,7 +494,8 @@ where
self.srv.clone(), self.srv.clone(),
self.expect.clone(), self.expect.clone(),
self.upgrade.clone(), self.upgrade.clone(),
on_connect, deprecated_on_connect,
connect_extensions,
addr, addr,
) )
} }
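The new `on_connect_ext` path hands the callback the raw IO object plus a mutable `Extensions` that is later merged into each request (see the `drain_from` call in the H2 dispatcher hunk further down). A hedged sketch of a callback with that shape; the `ConnInfo` type and callback body are invented for illustration, and only the `Fn(&T, &mut Extensions)` shape is taken from the call site above:

use std::net::SocketAddr;

use actix_rt::net::TcpStream;

use crate::Extensions;

// Invented per-connection data type, purely for illustration.
#[derive(Clone, Debug)]
struct ConnInfo {
    peer: Option<SocketAddr>,
}

// Matches the `Fn(&T, &mut Extensions)` shape used at the call site above.
fn example_on_connect(io: &TcpStream, ext: &mut Extensions) {
    ext.insert(ConnInfo {
        peer: io.peer_addr().ok(),
    });
}

Once the dispatcher has drained the connect extensions into a request, a handler could read the value back with `req.extensions().get::<ConnInfo>()`.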
@ -548,10 +569,12 @@ where
} }
#[doc(hidden)] #[doc(hidden)]
#[pin_project::pin_project]
pub struct OneRequestServiceResponse<T> pub struct OneRequestServiceResponse<T>
where where
T: AsyncRead + AsyncWrite + Unpin, T: AsyncRead + AsyncWrite + Unpin,
{ {
#[pin]
framed: Option<Framed<T, Codec>>, framed: Option<Framed<T, Codec>>,
} }
@ -562,16 +585,18 @@ where
type Output = Result<(Request, Framed<T, Codec>), ParseError>; type Output = Result<(Request, Framed<T, Codec>), ParseError>;
fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> { fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
match self.framed.as_mut().unwrap().next_item(cx) { let this = self.as_mut().project();
Poll::Ready(Some(Ok(req))) => match req {
match ready!(this.framed.as_pin_mut().unwrap().next_item(cx)) {
Some(Ok(req)) => match req {
Message::Item(req) => { Message::Item(req) => {
Poll::Ready(Ok((req, self.framed.take().unwrap()))) let mut this = self.as_mut().project();
Poll::Ready(Ok((req, this.framed.take().unwrap())))
} }
Message::Chunk(_) => unreachable!("Something is wrong"), Message::Chunk(_) => unreachable!("Something is wrong"),
}, },
Poll::Ready(Some(Err(err))) => Poll::Ready(Err(err)), Some(Err(err)) => Poll::Ready(Err(err)),
Poll::Ready(None) => Poll::Ready(Err(ParseError::Incomplete)), None => Poll::Ready(Err(ParseError::Incomplete)),
Poll::Pending => Poll::Pending,
} }
} }
} }

@ -3,13 +3,13 @@ use std::task::{Context, Poll};
use actix_codec::Framed; use actix_codec::Framed;
use actix_service::{Service, ServiceFactory}; use actix_service::{Service, ServiceFactory};
use futures_util::future::Ready; use futures_util::future::{ready, Ready};
use crate::error::Error; use crate::error::Error;
use crate::h1::Codec; use crate::h1::Codec;
use crate::request::Request; use crate::request::Request;
pub struct UpgradeHandler<T>(PhantomData<T>); pub struct UpgradeHandler<T>(pub(crate) PhantomData<T>);
impl<T> ServiceFactory for UpgradeHandler<T> { impl<T> ServiceFactory for UpgradeHandler<T> {
type Config = (); type Config = ();
@ -36,6 +36,6 @@ impl<T> Service for UpgradeHandler<T> {
} }
fn call(&mut self, _: Self::Request) -> Self::Future { fn call(&mut self, _: Self::Request) -> Self::Future {
unimplemented!() ready(Ok(()))
} }
} }

@ -9,12 +9,13 @@ use crate::error::Error;
use crate::h1::{Codec, Message}; use crate::h1::{Codec, Message};
use crate::response::Response; use crate::response::Response;
/// Send http/1 response /// Send HTTP/1 response
#[pin_project::pin_project] #[pin_project::pin_project]
pub struct SendResponse<T, B> { pub struct SendResponse<T, B> {
res: Option<Message<(Response<()>, BodySize)>>, res: Option<Message<(Response<()>, BodySize)>>,
#[pin] #[pin]
body: Option<ResponseBody<B>>, body: Option<ResponseBody<B>>,
#[pin]
framed: Option<Framed<T, Codec>>, framed: Option<Framed<T, Codec>>,
} }
@ -35,23 +36,30 @@ where
impl<T, B> Future for SendResponse<T, B> impl<T, B> Future for SendResponse<T, B>
where where
T: AsyncRead + AsyncWrite, T: AsyncRead + AsyncWrite + Unpin,
B: MessageBody + Unpin, B: MessageBody + Unpin,
{ {
type Output = Result<Framed<T, Codec>, Error>; type Output = Result<Framed<T, Codec>, Error>;
// TODO: rethink if we need loops in polls // TODO: rethink if we need loops in polls
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> { fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
let mut this = self.project(); let mut this = self.as_mut().project();
let mut body_done = this.body.is_none(); let mut body_done = this.body.is_none();
loop { loop {
let mut body_ready = !body_done; let mut body_ready = !body_done;
let framed = this.framed.as_mut().unwrap();
// send body // send body
if this.res.is_none() && body_ready { if this.res.is_none() && body_ready {
while body_ready && !body_done && !framed.is_write_buf_full() { while body_ready
&& !body_done
&& !this
.framed
.as_ref()
.as_pin_ref()
.unwrap()
.is_write_buf_full()
{
match this.body.as_mut().as_pin_mut().unwrap().poll_next(cx)? { match this.body.as_mut().as_pin_mut().unwrap().poll_next(cx)? {
Poll::Ready(item) => { Poll::Ready(item) => {
// body is done when item is None // body is done when item is None
@ -59,6 +67,7 @@ where
if body_done { if body_done {
let _ = this.body.take(); let _ = this.body.take();
} }
let framed = this.framed.as_mut().as_pin_mut().unwrap();
framed.write(Message::Chunk(item))?; framed.write(Message::Chunk(item))?;
} }
Poll::Pending => body_ready = false, Poll::Pending => body_ready = false,
@ -66,6 +75,8 @@ where
} }
} }
let framed = this.framed.as_mut().as_pin_mut().unwrap();
// flush write buffer // flush write buffer
if !framed.is_write_buf_empty() { if !framed.is_write_buf_empty() {
match framed.flush(cx)? { match framed.flush(cx)? {
@ -96,6 +107,9 @@ where
break; break;
} }
} }
Poll::Ready(Ok(this.framed.take().unwrap()))
let framed = this.framed.take().unwrap();
Poll::Ready(Ok(framed))
} }
} }

@ -24,6 +24,7 @@ use crate::message::ResponseHead;
use crate::payload::Payload; use crate::payload::Payload;
use crate::request::Request; use crate::request::Request;
use crate::response::Response; use crate::response::Response;
use crate::Extensions;
const CHUNK_SIZE: usize = 16_384; const CHUNK_SIZE: usize = 16_384;
@ -36,6 +37,7 @@ where
service: CloneableService<S>, service: CloneableService<S>,
connection: Connection<T, Bytes>, connection: Connection<T, Bytes>,
on_connect: Option<Box<dyn DataFactory>>, on_connect: Option<Box<dyn DataFactory>>,
on_connect_data: Extensions,
config: ServiceConfig, config: ServiceConfig,
peer_addr: Option<net::SocketAddr>, peer_addr: Option<net::SocketAddr>,
ka_expire: Instant, ka_expire: Instant,
@ -56,6 +58,7 @@ where
service: CloneableService<S>, service: CloneableService<S>,
connection: Connection<T, Bytes>, connection: Connection<T, Bytes>,
on_connect: Option<Box<dyn DataFactory>>, on_connect: Option<Box<dyn DataFactory>>,
on_connect_data: Extensions,
config: ServiceConfig, config: ServiceConfig,
timeout: Option<Delay>, timeout: Option<Delay>,
peer_addr: Option<net::SocketAddr>, peer_addr: Option<net::SocketAddr>,
@ -82,6 +85,7 @@ where
peer_addr, peer_addr,
connection, connection,
on_connect, on_connect,
on_connect_data,
ka_expire, ka_expire,
ka_timer, ka_timer,
_t: PhantomData, _t: PhantomData,
@ -130,11 +134,15 @@ where
head.headers = parts.headers.into(); head.headers = parts.headers.into();
head.peer_addr = this.peer_addr; head.peer_addr = this.peer_addr;
// DEPRECATED
// set on_connect data // set on_connect data
if let Some(ref on_connect) = this.on_connect { if let Some(ref on_connect) = this.on_connect {
on_connect.set(&mut req.extensions_mut()); on_connect.set(&mut req.extensions_mut());
} }
// merge on_connect_ext data into request extensions
req.extensions_mut().drain_from(&mut this.on_connect_data);
actix_rt::spawn(ServiceResponse::< actix_rt::spawn(ServiceResponse::<
S::Future, S::Future,
S::Response, S::Response,
@ -227,9 +235,11 @@ where
if !has_date { if !has_date {
let mut bytes = BytesMut::with_capacity(29); let mut bytes = BytesMut::with_capacity(29);
self.config.set_date_header(&mut bytes); self.config.set_date_header(&mut bytes);
res.headers_mut().insert(DATE, unsafe { res.headers_mut().insert(
HeaderValue::from_maybe_shared_unchecked(bytes.freeze()) DATE,
}); // SAFETY: serialized date-times are known ASCII strings
unsafe { HeaderValue::from_maybe_shared_unchecked(bytes.freeze()) },
);
} }
res res

@ -2,7 +2,7 @@ use std::future::Future;
use std::marker::PhantomData; use std::marker::PhantomData;
use std::pin::Pin; use std::pin::Pin;
use std::task::{Context, Poll}; use std::task::{Context, Poll};
use std::{net, rc}; use std::{net, rc::Rc};
use actix_codec::{AsyncRead, AsyncWrite}; use actix_codec::{AsyncRead, AsyncWrite};
use actix_rt::net::TcpStream; use actix_rt::net::TcpStream;
@ -23,6 +23,7 @@ use crate::error::{DispatchError, Error};
use crate::helpers::DataFactory; use crate::helpers::DataFactory;
use crate::request::Request; use crate::request::Request;
use crate::response::Response; use crate::response::Response;
use crate::{ConnectCallback, Extensions};
use super::dispatcher::Dispatcher; use super::dispatcher::Dispatcher;
@ -30,7 +31,8 @@ use super::dispatcher::Dispatcher;
pub struct H2Service<T, S, B> { pub struct H2Service<T, S, B> {
srv: S, srv: S,
cfg: ServiceConfig, cfg: ServiceConfig,
on_connect: Option<rc::Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>, on_connect: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>,
on_connect_ext: Option<Rc<ConnectCallback<T>>>,
_t: PhantomData<(T, B)>, _t: PhantomData<(T, B)>,
} }
@ -50,19 +52,27 @@ where
H2Service { H2Service {
cfg, cfg,
on_connect: None, on_connect: None,
on_connect_ext: None,
srv: service.into_factory(), srv: service.into_factory(),
_t: PhantomData, _t: PhantomData,
} }
} }
/// Set on connect callback. /// Set on connect callback.
pub(crate) fn on_connect( pub(crate) fn on_connect(
mut self, mut self,
f: Option<rc::Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>, f: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>,
) -> Self { ) -> Self {
self.on_connect = f; self.on_connect = f;
self self
} }
/// Set on connect callback.
pub(crate) fn on_connect_ext(mut self, f: Option<Rc<ConnectCallback<T>>>) -> Self {
self.on_connect_ext = f;
self
}
} }
impl<S, B> H2Service<TcpStream, S, B> impl<S, B> H2Service<TcpStream, S, B>
@ -97,7 +107,7 @@ where
mod openssl { mod openssl {
use actix_service::{fn_factory, fn_service}; use actix_service::{fn_factory, fn_service};
use actix_tls::openssl::{Acceptor, SslAcceptor, SslStream}; use actix_tls::openssl::{Acceptor, SslAcceptor, SslStream};
use actix_tls::{openssl::HandshakeError, SslError}; use actix_tls::{openssl::HandshakeError, TlsError};
use super::*; use super::*;
@ -117,12 +127,12 @@ mod openssl {
Config = (), Config = (),
Request = TcpStream, Request = TcpStream,
Response = (), Response = (),
Error = SslError<HandshakeError<TcpStream>, DispatchError>, Error = TlsError<HandshakeError<TcpStream>, DispatchError>,
InitError = S::InitError, InitError = S::InitError,
> { > {
pipeline_factory( pipeline_factory(
Acceptor::new(acceptor) Acceptor::new(acceptor)
.map_err(SslError::Ssl) .map_err(TlsError::Tls)
.map_init_err(|_| panic!()), .map_init_err(|_| panic!()),
) )
.and_then(fn_factory(|| { .and_then(fn_factory(|| {
@ -131,7 +141,7 @@ mod openssl {
ok((io, peer_addr)) ok((io, peer_addr))
})) }))
})) }))
.and_then(self.map_err(SslError::Service)) .and_then(self.map_err(TlsError::Service))
} }
} }
} }
@ -140,7 +150,7 @@ mod openssl {
mod rustls { mod rustls {
use super::*; use super::*;
use actix_tls::rustls::{Acceptor, ServerConfig, TlsStream}; use actix_tls::rustls::{Acceptor, ServerConfig, TlsStream};
use actix_tls::SslError; use actix_tls::TlsError;
use std::io; use std::io;
impl<S, B> H2Service<TlsStream<TcpStream>, S, B> impl<S, B> H2Service<TlsStream<TcpStream>, S, B>
@ -159,7 +169,7 @@ mod rustls {
Config = (), Config = (),
Request = TcpStream, Request = TcpStream,
Response = (), Response = (),
Error = SslError<io::Error, DispatchError>, Error = TlsError<io::Error, DispatchError>,
InitError = S::InitError, InitError = S::InitError,
> { > {
let protos = vec!["h2".to_string().into()]; let protos = vec!["h2".to_string().into()];
@ -167,7 +177,7 @@ mod rustls {
pipeline_factory( pipeline_factory(
Acceptor::new(config) Acceptor::new(config)
.map_err(SslError::Ssl) .map_err(TlsError::Tls)
.map_init_err(|_| panic!()), .map_init_err(|_| panic!()),
) )
.and_then(fn_factory(|| { .and_then(fn_factory(|| {
@ -176,7 +186,7 @@ mod rustls {
ok((io, peer_addr)) ok((io, peer_addr))
})) }))
})) }))
.and_then(self.map_err(SslError::Service)) .and_then(self.map_err(TlsError::Service))
} }
} }
} }
@ -203,6 +213,7 @@ where
fut: self.srv.new_service(()), fut: self.srv.new_service(()),
cfg: Some(self.cfg.clone()), cfg: Some(self.cfg.clone()),
on_connect: self.on_connect.clone(), on_connect: self.on_connect.clone(),
on_connect_ext: self.on_connect_ext.clone(),
_t: PhantomData, _t: PhantomData,
} }
} }
@ -214,7 +225,8 @@ pub struct H2ServiceResponse<T, S: ServiceFactory, B> {
#[pin] #[pin]
fut: S::Future, fut: S::Future,
cfg: Option<ServiceConfig>, cfg: Option<ServiceConfig>,
on_connect: Option<rc::Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>, on_connect: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>,
on_connect_ext: Option<Rc<ConnectCallback<T>>>,
_t: PhantomData<(T, B)>, _t: PhantomData<(T, B)>,
} }
@ -237,6 +249,7 @@ where
H2ServiceHandler::new( H2ServiceHandler::new(
this.cfg.take().unwrap(), this.cfg.take().unwrap(),
this.on_connect.clone(), this.on_connect.clone(),
this.on_connect_ext.clone(),
service, service,
) )
})) }))
@ -247,7 +260,8 @@ where
pub struct H2ServiceHandler<T, S: Service, B> { pub struct H2ServiceHandler<T, S: Service, B> {
srv: CloneableService<S>, srv: CloneableService<S>,
cfg: ServiceConfig, cfg: ServiceConfig,
on_connect: Option<rc::Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>, on_connect: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>,
on_connect_ext: Option<Rc<ConnectCallback<T>>>,
_t: PhantomData<(T, B)>, _t: PhantomData<(T, B)>,
} }
@ -261,12 +275,14 @@ where
{ {
fn new( fn new(
cfg: ServiceConfig, cfg: ServiceConfig,
on_connect: Option<rc::Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>, on_connect: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>,
on_connect_ext: Option<Rc<ConnectCallback<T>>>,
srv: S, srv: S,
) -> H2ServiceHandler<T, S, B> { ) -> H2ServiceHandler<T, S, B> {
H2ServiceHandler { H2ServiceHandler {
cfg, cfg,
on_connect, on_connect,
on_connect_ext,
srv: CloneableService::new(srv), srv: CloneableService::new(srv),
_t: PhantomData, _t: PhantomData,
} }
@ -296,18 +312,21 @@ where
} }
fn call(&mut self, (io, addr): Self::Request) -> Self::Future { fn call(&mut self, (io, addr): Self::Request) -> Self::Future {
let on_connect = if let Some(ref on_connect) = self.on_connect { let deprecated_on_connect = self.on_connect.as_ref().map(|handler| handler(&io));
Some(on_connect(&io))
} else { let mut connect_extensions = Extensions::new();
None if let Some(ref handler) = self.on_connect_ext {
}; // run on_connect_ext callback, populating connect extensions
handler(&io, &mut connect_extensions);
}
H2ServiceHandlerResponse { H2ServiceHandlerResponse {
state: State::Handshake( state: State::Handshake(
Some(self.srv.clone()), Some(self.srv.clone()),
Some(self.cfg.clone()), Some(self.cfg.clone()),
addr, addr,
on_connect, deprecated_on_connect,
Some(connect_extensions),
server::handshake(io), server::handshake(io),
), ),
} }
@ -325,6 +344,7 @@ where
Option<ServiceConfig>, Option<ServiceConfig>,
Option<net::SocketAddr>, Option<net::SocketAddr>,
Option<Box<dyn DataFactory>>, Option<Box<dyn DataFactory>>,
Option<Extensions>,
Handshake<T, Bytes>, Handshake<T, Bytes>,
), ),
} }
@ -360,6 +380,7 @@ where
ref mut config, ref mut config,
ref peer_addr, ref peer_addr,
ref mut on_connect, ref mut on_connect,
ref mut on_connect_data,
ref mut handshake, ref mut handshake,
) => match Pin::new(handshake).poll(cx) { ) => match Pin::new(handshake).poll(cx) {
Poll::Ready(Ok(conn)) => { Poll::Ready(Ok(conn)) => {
@ -367,6 +388,7 @@ where
srv.take().unwrap(), srv.take().unwrap(),
conn, conn,
on_connect.take(), on_connect.take(),
on_connect_data.take().unwrap(),
config.take().unwrap(), config.take().unwrap(),
None, None,
*peer_addr, *peer_addr,

View File

@ -1,3 +1,5 @@
use std::cmp::Ordering;
use mime::Mime; use mime::Mime;
use crate::header::{qitem, QualityItem}; use crate::header::{qitem, QualityItem};
@ -7,7 +9,7 @@ header! {
/// `Accept` header, defined in [RFC7231](http://tools.ietf.org/html/rfc7231#section-5.3.2) /// `Accept` header, defined in [RFC7231](http://tools.ietf.org/html/rfc7231#section-5.3.2)
/// ///
/// The `Accept` header field can be used by user agents to specify /// The `Accept` header field can be used by user agents to specify
/// response media types that are acceptable. Accept header fields can /// response media types that are acceptable. Accept header fields can
/// be used to indicate that the request is specifically limited to a /// be used to indicate that the request is specifically limited to a
/// small set of desired types, as in the case of a request for an /// small set of desired types, as in the case of a request for an
/// in-line image /// in-line image
@ -97,14 +99,14 @@ header! {
test_header!( test_header!(
test1, test1,
vec![b"audio/*; q=0.2, audio/basic"], vec![b"audio/*; q=0.2, audio/basic"],
Some(HeaderField(vec![ Some(Accept(vec![
QualityItem::new("audio/*".parse().unwrap(), q(200)), QualityItem::new("audio/*".parse().unwrap(), q(200)),
qitem("audio/basic".parse().unwrap()), qitem("audio/basic".parse().unwrap()),
]))); ])));
test_header!( test_header!(
test2, test2,
vec![b"text/plain; q=0.5, text/html, text/x-dvi; q=0.8, text/x-c"], vec![b"text/plain; q=0.5, text/html, text/x-dvi; q=0.8, text/x-c"],
Some(HeaderField(vec![ Some(Accept(vec![
QualityItem::new(mime::TEXT_PLAIN, q(500)), QualityItem::new(mime::TEXT_PLAIN, q(500)),
qitem(mime::TEXT_HTML), qitem(mime::TEXT_HTML),
QualityItem::new( QualityItem::new(
@ -138,23 +140,148 @@ header! {
} }
impl Accept { impl Accept {
/// A constructor to easily create `Accept: */*`. /// Construct `Accept: */*`.
pub fn star() -> Accept { pub fn star() -> Accept {
Accept(vec![qitem(mime::STAR_STAR)]) Accept(vec![qitem(mime::STAR_STAR)])
} }
/// A constructor to easily create `Accept: application/json`. /// Construct `Accept: application/json`.
pub fn json() -> Accept { pub fn json() -> Accept {
Accept(vec![qitem(mime::APPLICATION_JSON)]) Accept(vec![qitem(mime::APPLICATION_JSON)])
} }
/// A constructor to easily create `Accept: text/*`. /// Construct `Accept: text/*`.
pub fn text() -> Accept { pub fn text() -> Accept {
Accept(vec![qitem(mime::TEXT_STAR)]) Accept(vec![qitem(mime::TEXT_STAR)])
} }
/// A constructor to easily create `Accept: image/*`. /// Construct `Accept: image/*`.
pub fn image() -> Accept { pub fn image() -> Accept {
Accept(vec![qitem(mime::IMAGE_STAR)]) Accept(vec![qitem(mime::IMAGE_STAR)])
} }
/// Construct `Accept: text/html`.
pub fn html() -> Accept {
Accept(vec![qitem(mime::TEXT_HTML)])
}
/// Returns a sorted list of mime types from highest to lowest preference, accounting for
/// [q-factor weighting] and specificity.
///
/// [q-factor weighting]: https://tools.ietf.org/html/rfc7231#section-5.3.2
pub fn mime_precedence(&self) -> Vec<Mime> {
let mut types = self.0.clone();
// use stable sort so items with equal q-factor and specificity retain listed order
types.sort_by(|a, b| {
// sort by q-factor descending
b.quality.cmp(&a.quality).then_with(|| {
// use specificity rules on mime types with the
// same q-factor (e.g. text/html > text/* > */*)
// subtypes are not comparable if the main type is star, so return early
match (a.item.type_(), b.item.type_()) {
(mime::STAR, mime::STAR) => return Ordering::Equal,
// a is sorted after b
(mime::STAR, _) => return Ordering::Greater,
// a is sorted before b
(_, mime::STAR) => return Ordering::Less,
_ => {}
}
// in both these match expressions, the returned ordering appears
// inverted because sort is high-to-low ("descending") precedence
match (a.item.subtype(), b.item.subtype()) {
(mime::STAR, mime::STAR) => Ordering::Equal,
// a is sorted after b
(mime::STAR, _) => Ordering::Greater,
// a is sorted before b
(_, mime::STAR) => Ordering::Less,
_ => Ordering::Equal,
}
})
});
types.into_iter().map(|qitem| qitem.item).collect()
}
/// Extracts the most preferable mime type, accounting for [q-factor weighting].
///
/// If no q-factors are provided, the first mime type is chosen. Note that items without
/// q-factors are given the maximum preference value.
///
/// Returns `None` if contained list is empty.
///
/// [q-factor weighting]: https://tools.ietf.org/html/rfc7231#section-5.3.2
pub fn mime_preference(&self) -> Option<Mime> {
let types = self.mime_precedence();
types.first().cloned()
}
}
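A hedged usage sketch of the two methods above, written as if it sat in the test module below (`use super::*;`); the raw header string and the `preferred` helper are invented for illustration, and the per-item parsing leans on the `QualityItem` `FromStr` impl shown later in this compare view:

fn preferred(raw: &str) -> Option<mime::Mime> {
    let items: Vec<QualityItem<mime::Mime>> = raw
        .split(',')
        .filter_map(|part| part.trim().parse().ok())
        .collect();

    Accept(items).mime_preference()
}

#[test]
fn preference_follows_q_factor() {
    // "text/html" carries no q-factor, so it gets the maximum preference of 1.0
    assert_eq!(
        preferred("text/plain; q=0.5, text/html, */*; q=0.1"),
        Some(mime::TEXT_HTML),
    );
}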
#[cfg(test)]
mod tests {
use super::*;
use crate::header::q;
#[test]
fn test_mime_precedence() {
let test = Accept(vec![]);
assert!(test.mime_precedence().is_empty());
let test = Accept(vec![qitem(mime::APPLICATION_JSON)]);
assert_eq!(test.mime_precedence(), vec!(mime::APPLICATION_JSON));
let test = Accept(vec![
qitem(mime::TEXT_HTML),
"application/xhtml+xml".parse().unwrap(),
QualityItem::new("application/xml".parse().unwrap(), q(0.9)),
QualityItem::new(mime::STAR_STAR, q(0.8)),
]);
assert_eq!(
test.mime_precedence(),
vec![
mime::TEXT_HTML,
"application/xhtml+xml".parse().unwrap(),
"application/xml".parse().unwrap(),
mime::STAR_STAR,
]
);
let test = Accept(vec![
qitem(mime::STAR_STAR),
qitem(mime::IMAGE_STAR),
qitem(mime::IMAGE_PNG),
]);
assert_eq!(
test.mime_precedence(),
vec![mime::IMAGE_PNG, mime::IMAGE_STAR, mime::STAR_STAR]
);
}
#[test]
fn test_mime_preference() {
let test = Accept(vec![
qitem(mime::TEXT_HTML),
"application/xhtml+xml".parse().unwrap(),
QualityItem::new("application/xml".parse().unwrap(), q(0.9)),
QualityItem::new(mime::STAR_STAR, q(0.8)),
]);
assert_eq!(test.mime_preference(), Some(mime::TEXT_HTML));
let test = Accept(vec![
QualityItem::new("video/*".parse().unwrap(), q(0.8)),
qitem(mime::IMAGE_PNG),
QualityItem::new(mime::STAR_STAR, q(0.5)),
qitem(mime::IMAGE_SVG),
QualityItem::new(mime::IMAGE_STAR, q(0.8)),
]);
assert_eq!(test.mime_preference(), Some(mime::IMAGE_PNG));
}
} }

@ -283,11 +283,11 @@ impl DispositionParam {
/// Some("\u{1f600}.svg".as_bytes())); /// Some("\u{1f600}.svg".as_bytes()));
/// ``` /// ```
/// ///
/// # WARN /// # Security Note
///
/// If "filename" parameter is supplied, do not use the file name blindly, check and possibly /// If "filename" parameter is supplied, do not use the file name blindly, check and possibly
/// change to match local file system conventions if applicable, and do not use directory path /// change to match local file system conventions if applicable, and do not use directory path
/// information that may be present. See [RFC2183](https://tools.ietf.org/html/rfc2183#section-2.3) /// information that may be present. See [RFC2183](https://tools.ietf.org/html/rfc2183#section-2.3).
/// .
#[derive(Clone, Debug, PartialEq)] #[derive(Clone, Debug, PartialEq)]
pub struct ContentDisposition { pub struct ContentDisposition {
/// The disposition type /// The disposition type
@ -550,8 +550,7 @@ impl fmt::Display for ContentDisposition {
write!(f, "{}", self.disposition)?; write!(f, "{}", self.disposition)?;
self.parameters self.parameters
.iter() .iter()
.map(|param| write!(f, "; {}", param)) .try_for_each(|param| write!(f, "; {}", param))
.collect()
} }
} }

@ -3,7 +3,7 @@
//! ## Mime //! ## Mime
//! //!
//! Several header fields use MIME values for their contents. Keeping with the //! Several header fields use MIME values for their contents. Keeping with the
//! strongly-typed theme, the [mime](https://docs.rs/mime) crate //! strongly-typed theme, the [mime] crate
//! is used, such as `ContentType(pub Mime)`. //! is used, such as `ContentType(pub Mime)`.
#![cfg_attr(rustfmt, rustfmt_skip)] #![cfg_attr(rustfmt, rustfmt_skip)]

@ -8,8 +8,6 @@ use http::header::{HeaderName, HeaderValue};
/// A set of HTTP headers /// A set of HTTP headers
/// ///
/// `HeaderMap` is an multi-map of [`HeaderName`] to values. /// `HeaderMap` is an multi-map of [`HeaderName`] to values.
///
/// [`HeaderName`]: struct.HeaderName.html
#[derive(Debug, Clone)] #[derive(Debug, Clone)]
pub struct HeaderMap { pub struct HeaderMap {
pub(crate) inner: FxHashMap<HeaderName, Value>, pub(crate) inner: FxHashMap<HeaderName, Value>,
@ -141,8 +139,6 @@ impl HeaderMap {
/// The returned view does not incur any allocations and allows iterating /// The returned view does not incur any allocations and allows iterating
/// the values associated with the key. See [`GetAll`] for more details. /// the values associated with the key. See [`GetAll`] for more details.
/// Returns `None` if there are no values associated with the key. /// Returns `None` if there are no values associated with the key.
///
/// [`GetAll`]: struct.GetAll.html
pub fn get_all<N: AsName>(&self, name: N) -> GetAll<'_> { pub fn get_all<N: AsName>(&self, name: N) -> GetAll<'_> {
GetAll { GetAll {
idx: 0, idx: 0,

@ -370,9 +370,7 @@ impl fmt::Display for ExtendedValue {
} }
/// Percent encode a sequence of bytes with a character set defined in /// Percent encode a sequence of bytes with a character set defined in
/// [https://tools.ietf.org/html/rfc5987#section-3.2][url] /// <https://tools.ietf.org/html/rfc5987#section-3.2>
///
/// [url]: https://tools.ietf.org/html/rfc5987#section-3.2
pub fn http_percent_encode(f: &mut fmt::Formatter<'_>, bytes: &[u8]) -> fmt::Result { pub fn http_percent_encode(f: &mut fmt::Formatter<'_>, bytes: &[u8]) -> fmt::Result {
let encoded = percent_encoding::percent_encode(bytes, HTTP_VALUE); let encoded = percent_encoding::percent_encode(bytes, HTTP_VALUE);
fmt::Display::fmt(&encoded, f) fmt::Display::fmt(&encoded, f)

@ -7,9 +7,7 @@ use self::Charset::*;
/// ///
/// The string representation is normalized to upper case. /// The string representation is normalized to upper case.
/// ///
/// See [http://www.iana.org/assignments/character-sets/character-sets.xhtml][url]. /// See <http://www.iana.org/assignments/character-sets/character-sets.xhtml>.
///
/// [url]: http://www.iana.org/assignments/character-sets/character-sets.xhtml
#[derive(Clone, Debug, PartialEq)] #[derive(Clone, Debug, PartialEq)]
#[allow(non_camel_case_types)] #[allow(non_camel_case_types)]
pub enum Charset { pub enum Charset {

@ -7,10 +7,12 @@ use crate::header::{HeaderValue, IntoHeaderValue, InvalidHeaderValue, Writer};
/// 1. `%x21`, or /// 1. `%x21`, or
/// 2. in the range `%x23` to `%x7E`, or /// 2. in the range `%x23` to `%x7E`, or
/// 3. above `%x80` /// 3. above `%x80`
fn entity_validate_char(c: u8) -> bool {
c == 0x21 || (0x23..=0x7e).contains(&c) || (c >= 0x80)
}
fn check_slice_validity(slice: &str) -> bool { fn check_slice_validity(slice: &str) -> bool {
slice slice.bytes().all(entity_validate_char)
.bytes()
.all(|c| c == b'\x21' || (c >= b'\x23' && c <= b'\x7e') | (c >= b'\x80'))
} }
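A couple of spot checks of the character rules listed above, written test-module style; these asserts are illustrative and not part of the diff:

#[test]
fn entity_tag_char_rules() {
    // '!' is %x21 and ASCII alphanumerics fall in the %x23..=%x7E range
    assert!(check_slice_validity("abc123!"));

    // '"' is %x22, which the grammar above excludes
    assert!(!check_slice_validity("abc\"123"));
}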
/// An entity tag, defined in [RFC7232](https://tools.ietf.org/html/rfc7232#section-2.3) /// An entity tag, defined in [RFC7232](https://tools.ietf.org/html/rfc7232#section-2.3)

@ -1,10 +1,17 @@
use std::{cmp, fmt, str}; use std::{
cmp,
convert::{TryFrom, TryInto},
fmt, str,
};
use self::internal::IntoQuality; use derive_more::{Display, Error};
const MAX_QUALITY: u16 = 1000;
const MAX_FLOAT_QUALITY: f32 = 1.0;
/// Represents a quality used in quality values. /// Represents a quality used in quality values.
/// ///
/// Can be created with the `q` function. /// Can be created with the [`q`] function.
/// ///
/// # Implementation notes /// # Implementation notes
/// ///
@ -18,12 +25,54 @@ use self::internal::IntoQuality;
/// ///
/// [RFC7231 Section 5.3.1](https://tools.ietf.org/html/rfc7231#section-5.3.1) /// [RFC7231 Section 5.3.1](https://tools.ietf.org/html/rfc7231#section-5.3.1)
/// gives more information on quality values in HTTP header fields. /// gives more information on quality values in HTTP header fields.
#[derive(Copy, Clone, Debug, Eq, Ord, PartialEq, PartialOrd)] #[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub struct Quality(u16); pub struct Quality(u16);
impl Quality {
/// # Panics
/// Panics in debug mode when value is not in the range 0.0 <= n <= 1.0.
fn from_f32(value: f32) -> Self {
// Checking that `value` is within range should be done before calling this method.
// Just in case, this debug_assert should catch it if we were forgetful.
debug_assert!(
(0.0f32..=1.0f32).contains(&value),
"q value must be between 0.0 and 1.0"
);
Quality((value * MAX_QUALITY as f32) as u16)
}
}
impl Default for Quality { impl Default for Quality {
fn default() -> Quality { fn default() -> Quality {
Quality(1000) Quality(MAX_QUALITY)
}
}
#[derive(Debug, Clone, Display, Error)]
pub struct QualityOutOfBounds;
impl TryFrom<u16> for Quality {
type Error = QualityOutOfBounds;
fn try_from(value: u16) -> Result<Self, Self::Error> {
if (0..=MAX_QUALITY).contains(&value) {
Ok(Quality(value))
} else {
Err(QualityOutOfBounds)
}
}
}
impl TryFrom<f32> for Quality {
type Error = QualityOutOfBounds;
fn try_from(value: f32) -> Result<Self, Self::Error> {
if (0.0..=MAX_FLOAT_QUALITY).contains(&value) {
Ok(Quality::from_f32(value))
} else {
Err(QualityOutOfBounds)
}
} }
} }
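Together these impls give a non-panicking construction path alongside the existing `q` helper; a short test-style sketch with illustrative values:

use std::convert::TryFrom;

#[test]
fn quality_try_from() {
    // 750 out of 1000 and 0.75 denote the same quality value
    assert_eq!(
        Quality::try_from(750u16).unwrap(),
        Quality::try_from(0.75f32).unwrap(),
    );

    // out-of-range inputs become errors instead of panics
    assert!(Quality::try_from(1001u16).is_err());
    assert!(Quality::try_from(1.5f32).is_err());
}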
@ -55,8 +104,9 @@ impl<T: PartialEq> cmp::PartialOrd for QualityItem<T> {
impl<T: fmt::Display> fmt::Display for QualityItem<T> { impl<T: fmt::Display> fmt::Display for QualityItem<T> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
fmt::Display::fmt(&self.item, f)?; fmt::Display::fmt(&self.item, f)?;
match self.quality.0 { match self.quality.0 {
1000 => Ok(()), MAX_QUALITY => Ok(()),
0 => f.write_str("; q=0"), 0 => f.write_str("; q=0"),
x => write!(f, "; q=0.{}", format!("{:03}", x).trim_end_matches('0')), x => write!(f, "; q=0.{}", format!("{:03}", x).trim_end_matches('0')),
} }
@ -66,105 +116,79 @@ impl<T: fmt::Display> fmt::Display for QualityItem<T> {
impl<T: str::FromStr> str::FromStr for QualityItem<T> { impl<T: str::FromStr> str::FromStr for QualityItem<T> {
type Err = crate::error::ParseError; type Err = crate::error::ParseError;
fn from_str(s: &str) -> Result<QualityItem<T>, crate::error::ParseError> { fn from_str(qitem_str: &str) -> Result<QualityItem<T>, crate::error::ParseError> {
if !s.is_ascii() { if !qitem_str.is_ascii() {
return Err(crate::error::ParseError::Header); return Err(crate::error::ParseError::Header);
} }
// Set defaults used if parsing fails. // Set defaults used if parsing fails.
let mut raw_item = s; let mut raw_item = qitem_str;
let mut quality = 1f32; let mut quality = 1f32;
let parts: Vec<&str> = s.rsplitn(2, ';').map(|x| x.trim()).collect(); let parts: Vec<_> = qitem_str.rsplitn(2, ';').map(str::trim).collect();
if parts.len() == 2 { if parts.len() == 2 {
// example for item with q-factor:
//
// gzip; q=0.65
// ^^^^^^ parts[0]
// ^^ start
// ^^^^ q_val
// ^^^^ parts[1]
if parts[0].len() < 2 { if parts[0].len() < 2 {
// Can't possibly be an attribute since an attribute needs at least a name followed
// by an equals sign. And bare identifiers are forbidden.
return Err(crate::error::ParseError::Header); return Err(crate::error::ParseError::Header);
} }
let start = &parts[0][0..2]; let start = &parts[0][0..2];
if start == "q=" || start == "Q=" { if start == "q=" || start == "Q=" {
let q_part = &parts[0][2..parts[0].len()]; let q_val = &parts[0][2..];
if q_part.len() > 5 { if q_val.len() > 5 {
// longer than 5 indicates an over-precise q-factor
return Err(crate::error::ParseError::Header); return Err(crate::error::ParseError::Header);
} }
match q_part.parse::<f32>() {
Ok(q_value) => { let q_value = q_val
if 0f32 <= q_value && q_value <= 1f32 { .parse::<f32>()
quality = q_value; .map_err(|_| crate::error::ParseError::Header)?;
raw_item = parts[1];
} else { if (0f32..=1f32).contains(&q_value) {
return Err(crate::error::ParseError::Header); quality = q_value;
} raw_item = parts[1];
} } else {
Err(_) => return Err(crate::error::ParseError::Header), return Err(crate::error::ParseError::Header);
} }
} }
} }
match raw_item.parse::<T>() {
// we already checked above that the quality is within range
Ok(item) => Ok(QualityItem::new(item, from_f32(quality))),
Err(_) => Err(crate::error::ParseError::Header),
}
}
}
#[inline] let item = raw_item
fn from_f32(f: f32) -> Quality { .parse::<T>()
// this function is only used internally. A check that `f` is within range .map_err(|_| crate::error::ParseError::Header)?;
// should be done before calling this method. Just in case, this
// debug_assert should catch if we were forgetful // we already checked above that the quality is within range
debug_assert!( Ok(QualityItem::new(item, Quality::from_f32(quality)))
f >= 0f32 && f <= 1f32, }
"q value must be between 0.0 and 1.0"
);
Quality((f * 1000f32) as u16)
} }
/// Convenience function to wrap a value in a `QualityItem` /// Convenience function to wrap a value in a `QualityItem`
/// Sets `q` to the default 1.0 /// Sets `q` to the default 1.0
pub fn qitem<T>(item: T) -> QualityItem<T> { pub fn qitem<T>(item: T) -> QualityItem<T> {
QualityItem::new(item, Default::default()) QualityItem::new(item, Quality::default())
} }
/// Convenience function to create a `Quality` from a float or integer. /// Convenience function to create a `Quality` from a float or integer.
/// ///
/// Implemented for `u16` and `f32`. Panics if value is out of range. /// Implemented for `u16` and `f32`. Panics if value is out of range.
pub fn q<T: IntoQuality>(val: T) -> Quality { pub fn q<T>(val: T) -> Quality
val.into_quality() where
} T: TryInto<Quality>,
T::Error: fmt::Debug,
mod internal { {
use super::Quality; // TODO: on next breaking change, handle unwrap differently
val.try_into().unwrap()
// TryFrom is probably better, but it's not stable. For now, we want to
// keep the functionality of the `q` function, while allowing it to be
// generic over `f32` and `u16`.
//
// `q` would panic before, so keep that behavior. `TryFrom` can be
// introduced later for a non-panicking conversion.
pub trait IntoQuality: Sealed + Sized {
fn into_quality(self) -> Quality;
}
impl IntoQuality for f32 {
fn into_quality(self) -> Quality {
assert!(
self >= 0f32 && self <= 1f32,
"float must be between 0.0 and 1.0"
);
super::from_f32(self)
}
}
impl IntoQuality for u16 {
fn into_quality(self) -> Quality {
assert!(self <= 1000, "u16 must be between 0 and 1000");
Quality(self)
}
}
pub trait Sealed {}
impl Sealed for u16 {}
impl Sealed for f32 {}
} }
#[cfg(test)] #[cfg(test)]
@ -270,15 +294,13 @@ mod tests {
} }
#[test] #[test]
#[should_panic] // FIXME - 32-bit msvc unwinding broken #[should_panic]
#[cfg_attr(all(target_arch = "x86", target_env = "msvc"), ignore)]
fn test_quality_invalid() { fn test_quality_invalid() {
q(-1.0); q(-1.0);
} }
#[test] #[test]
#[should_panic] // FIXME - 32-bit msvc unwinding broken #[should_panic]
#[cfg_attr(all(target_arch = "x86", target_env = "msvc"), ignore)]
fn test_quality_invalid2() { fn test_quality_invalid2() {
q(2.0); q(2.0);
} }
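The panic tests above only cover out-of-range input. A minimal sketch of the in-range behaviour, assuming the reworked `q` keeps the old numeric ranges behind its new `TryInto<Quality>` bound and that `Quality` still derives `PartialEq`:

#[test]
fn test_quality_in_range() {
    // Hypothetical companion test: floats in 0.0..=1.0 and integers in
    // 0..=1000 are accepted; both forms below denote the maximum quality.
    assert_eq!(q(1.0), q(1000u16));
    assert_eq!(qitem("gzip").quality, q(1.0));
}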


@ -50,6 +50,7 @@ impl<'a> io::Write for Writer<'a> {
self.0.extend_from_slice(buf); self.0.extend_from_slice(buf);
Ok(buf.len()) Ok(buf.len())
} }
fn flush(&mut self) -> io::Result<()> { fn flush(&mut self) -> io::Result<()> {
Ok(()) Ok(())
} }


@ -1,10 +1,12 @@
//! Basic http responses //! Status code based HTTP response builders.
#![allow(non_upper_case_globals)] #![allow(non_upper_case_globals)]
use http::StatusCode; use http::StatusCode;
use crate::response::{Response, ResponseBuilder}; use crate::response::{Response, ResponseBuilder};
macro_rules! STATIC_RESP { macro_rules! static_resp {
($name:ident, $status:expr) => { ($name:ident, $status:expr) => {
#[allow(non_snake_case, missing_docs)] #[allow(non_snake_case, missing_docs)]
pub fn $name() -> ResponseBuilder { pub fn $name() -> ResponseBuilder {
@ -14,63 +16,67 @@ macro_rules! STATIC_RESP {
} }
impl Response { impl Response {
STATIC_RESP!(Ok, StatusCode::OK); static_resp!(Continue, StatusCode::CONTINUE);
STATIC_RESP!(Created, StatusCode::CREATED); static_resp!(SwitchingProtocols, StatusCode::SWITCHING_PROTOCOLS);
STATIC_RESP!(Accepted, StatusCode::ACCEPTED); static_resp!(Processing, StatusCode::PROCESSING);
STATIC_RESP!(
static_resp!(Ok, StatusCode::OK);
static_resp!(Created, StatusCode::CREATED);
static_resp!(Accepted, StatusCode::ACCEPTED);
static_resp!(
NonAuthoritativeInformation, NonAuthoritativeInformation,
StatusCode::NON_AUTHORITATIVE_INFORMATION StatusCode::NON_AUTHORITATIVE_INFORMATION
); );
STATIC_RESP!(NoContent, StatusCode::NO_CONTENT); static_resp!(NoContent, StatusCode::NO_CONTENT);
STATIC_RESP!(ResetContent, StatusCode::RESET_CONTENT); static_resp!(ResetContent, StatusCode::RESET_CONTENT);
STATIC_RESP!(PartialContent, StatusCode::PARTIAL_CONTENT); static_resp!(PartialContent, StatusCode::PARTIAL_CONTENT);
STATIC_RESP!(MultiStatus, StatusCode::MULTI_STATUS); static_resp!(MultiStatus, StatusCode::MULTI_STATUS);
STATIC_RESP!(AlreadyReported, StatusCode::ALREADY_REPORTED); static_resp!(AlreadyReported, StatusCode::ALREADY_REPORTED);
STATIC_RESP!(MultipleChoices, StatusCode::MULTIPLE_CHOICES); static_resp!(MultipleChoices, StatusCode::MULTIPLE_CHOICES);
STATIC_RESP!(MovedPermanently, StatusCode::MOVED_PERMANENTLY); static_resp!(MovedPermanently, StatusCode::MOVED_PERMANENTLY);
STATIC_RESP!(Found, StatusCode::FOUND); static_resp!(Found, StatusCode::FOUND);
STATIC_RESP!(SeeOther, StatusCode::SEE_OTHER); static_resp!(SeeOther, StatusCode::SEE_OTHER);
STATIC_RESP!(NotModified, StatusCode::NOT_MODIFIED); static_resp!(NotModified, StatusCode::NOT_MODIFIED);
STATIC_RESP!(UseProxy, StatusCode::USE_PROXY); static_resp!(UseProxy, StatusCode::USE_PROXY);
STATIC_RESP!(TemporaryRedirect, StatusCode::TEMPORARY_REDIRECT); static_resp!(TemporaryRedirect, StatusCode::TEMPORARY_REDIRECT);
STATIC_RESP!(PermanentRedirect, StatusCode::PERMANENT_REDIRECT); static_resp!(PermanentRedirect, StatusCode::PERMANENT_REDIRECT);
STATIC_RESP!(BadRequest, StatusCode::BAD_REQUEST); static_resp!(BadRequest, StatusCode::BAD_REQUEST);
STATIC_RESP!(NotFound, StatusCode::NOT_FOUND); static_resp!(NotFound, StatusCode::NOT_FOUND);
STATIC_RESP!(Unauthorized, StatusCode::UNAUTHORIZED); static_resp!(Unauthorized, StatusCode::UNAUTHORIZED);
STATIC_RESP!(PaymentRequired, StatusCode::PAYMENT_REQUIRED); static_resp!(PaymentRequired, StatusCode::PAYMENT_REQUIRED);
STATIC_RESP!(Forbidden, StatusCode::FORBIDDEN); static_resp!(Forbidden, StatusCode::FORBIDDEN);
STATIC_RESP!(MethodNotAllowed, StatusCode::METHOD_NOT_ALLOWED); static_resp!(MethodNotAllowed, StatusCode::METHOD_NOT_ALLOWED);
STATIC_RESP!(NotAcceptable, StatusCode::NOT_ACCEPTABLE); static_resp!(NotAcceptable, StatusCode::NOT_ACCEPTABLE);
STATIC_RESP!( static_resp!(
ProxyAuthenticationRequired, ProxyAuthenticationRequired,
StatusCode::PROXY_AUTHENTICATION_REQUIRED StatusCode::PROXY_AUTHENTICATION_REQUIRED
); );
STATIC_RESP!(RequestTimeout, StatusCode::REQUEST_TIMEOUT); static_resp!(RequestTimeout, StatusCode::REQUEST_TIMEOUT);
STATIC_RESP!(Conflict, StatusCode::CONFLICT); static_resp!(Conflict, StatusCode::CONFLICT);
STATIC_RESP!(Gone, StatusCode::GONE); static_resp!(Gone, StatusCode::GONE);
STATIC_RESP!(LengthRequired, StatusCode::LENGTH_REQUIRED); static_resp!(LengthRequired, StatusCode::LENGTH_REQUIRED);
STATIC_RESP!(PreconditionFailed, StatusCode::PRECONDITION_FAILED); static_resp!(PreconditionFailed, StatusCode::PRECONDITION_FAILED);
STATIC_RESP!(PreconditionRequired, StatusCode::PRECONDITION_REQUIRED); static_resp!(PreconditionRequired, StatusCode::PRECONDITION_REQUIRED);
STATIC_RESP!(PayloadTooLarge, StatusCode::PAYLOAD_TOO_LARGE); static_resp!(PayloadTooLarge, StatusCode::PAYLOAD_TOO_LARGE);
STATIC_RESP!(UriTooLong, StatusCode::URI_TOO_LONG); static_resp!(UriTooLong, StatusCode::URI_TOO_LONG);
STATIC_RESP!(UnsupportedMediaType, StatusCode::UNSUPPORTED_MEDIA_TYPE); static_resp!(UnsupportedMediaType, StatusCode::UNSUPPORTED_MEDIA_TYPE);
STATIC_RESP!(RangeNotSatisfiable, StatusCode::RANGE_NOT_SATISFIABLE); static_resp!(RangeNotSatisfiable, StatusCode::RANGE_NOT_SATISFIABLE);
STATIC_RESP!(ExpectationFailed, StatusCode::EXPECTATION_FAILED); static_resp!(ExpectationFailed, StatusCode::EXPECTATION_FAILED);
STATIC_RESP!(UnprocessableEntity, StatusCode::UNPROCESSABLE_ENTITY); static_resp!(UnprocessableEntity, StatusCode::UNPROCESSABLE_ENTITY);
STATIC_RESP!(TooManyRequests, StatusCode::TOO_MANY_REQUESTS); static_resp!(TooManyRequests, StatusCode::TOO_MANY_REQUESTS);
STATIC_RESP!(InternalServerError, StatusCode::INTERNAL_SERVER_ERROR); static_resp!(InternalServerError, StatusCode::INTERNAL_SERVER_ERROR);
STATIC_RESP!(NotImplemented, StatusCode::NOT_IMPLEMENTED); static_resp!(NotImplemented, StatusCode::NOT_IMPLEMENTED);
STATIC_RESP!(BadGateway, StatusCode::BAD_GATEWAY); static_resp!(BadGateway, StatusCode::BAD_GATEWAY);
STATIC_RESP!(ServiceUnavailable, StatusCode::SERVICE_UNAVAILABLE); static_resp!(ServiceUnavailable, StatusCode::SERVICE_UNAVAILABLE);
STATIC_RESP!(GatewayTimeout, StatusCode::GATEWAY_TIMEOUT); static_resp!(GatewayTimeout, StatusCode::GATEWAY_TIMEOUT);
STATIC_RESP!(VersionNotSupported, StatusCode::HTTP_VERSION_NOT_SUPPORTED); static_resp!(VersionNotSupported, StatusCode::HTTP_VERSION_NOT_SUPPORTED);
STATIC_RESP!(VariantAlsoNegotiates, StatusCode::VARIANT_ALSO_NEGOTIATES); static_resp!(VariantAlsoNegotiates, StatusCode::VARIANT_ALSO_NEGOTIATES);
STATIC_RESP!(InsufficientStorage, StatusCode::INSUFFICIENT_STORAGE); static_resp!(InsufficientStorage, StatusCode::INSUFFICIENT_STORAGE);
STATIC_RESP!(LoopDetected, StatusCode::LOOP_DETECTED); static_resp!(LoopDetected, StatusCode::LOOP_DETECTED);
} }
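Each renamed helper expands to a `ResponseBuilder` preset to its status code; a short usage sketch (a hedged illustration, not part of the diff):

use actix_http::Response;
use http::StatusCode;

#[test]
fn helpers_preset_status() {
    // The generated helpers only differ in the status code they start from.
    assert_eq!(Response::Ok().finish().status(), StatusCode::OK);
    assert_eq!(Response::NotFound().finish().status(), StatusCode::NOT_FOUND);
}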
#[cfg(test)] #[cfg(test)]


@ -1,11 +1,15 @@
//! Basic http primitives for actix-net framework. //! HTTP primitives for the Actix ecosystem.
#![warn(rust_2018_idioms, warnings)]
#![deny(rust_2018_idioms)]
#![allow( #![allow(
clippy::type_complexity, clippy::type_complexity,
clippy::too_many_arguments, clippy::too_many_arguments,
clippy::new_without_default, clippy::new_without_default,
clippy::borrow_interior_mutable_const clippy::borrow_interior_mutable_const
)] )]
#![allow(clippy::manual_strip)] // Allow this to keep MSRV(1.42).
#![doc(html_logo_url = "https://actix.rs/img/logo.png")]
#![doc(html_favicon_url = "https://actix.rs/favicon.ico")]
#[macro_use] #[macro_use]
extern crate log; extern crate log;
@ -76,3 +80,5 @@ pub enum Protocol {
Http1, Http1,
Http2, Http2,
} }
type ConnectCallback<IO> = dyn Fn(&IO, &mut Extensions);
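The new `ConnectCallback` alias is the hook type behind the `on_connect_ext` additions later in this diff; a hedged sketch of one such callback for a plain TCP transport (the stored type and the `TcpStream` choice are illustrative):

use actix_http::Extensions;
use actix_rt::net::TcpStream;

// Hypothetical callback matching `ConnectCallback<TcpStream>`: stash the
// peer address in the per-connection Extensions so handlers can read it
// back via `req.extensions()`.
fn remember_peer(io: &TcpStream, ext: &mut Extensions) {
    if let Ok(addr) = io.peer_addr() {
        ext.insert(addr);
    }
}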


@ -38,7 +38,7 @@ macro_rules! downcast {
/// Downcasts generic body to a specific type. /// Downcasts generic body to a specific type.
pub fn downcast_ref<T: $name + 'static>(&self) -> Option<&T> { pub fn downcast_ref<T: $name + 'static>(&self) -> Option<&T> {
if self.__private_get_type_id__().0 == std::any::TypeId::of::<T>() { if self.__private_get_type_id__().0 == std::any::TypeId::of::<T>() {
// Safety: external crates cannot override the default // SAFETY: external crates cannot override the default
// implementation of `__private_get_type_id__`, since // implementation of `__private_get_type_id__`, since
// it requires returning a private type. We can therefore // it requires returning a private type. We can therefore
// rely on the returned `TypeId`, which ensures that this // rely on the returned `TypeId`, which ensures that this
@ -48,10 +48,11 @@ macro_rules! downcast {
None None
} }
} }
/// Downcasts a generic body to a mutable specific type. /// Downcasts a generic body to a mutable specific type.
pub fn downcast_mut<T: $name + 'static>(&mut self) -> Option<&mut T> { pub fn downcast_mut<T: $name + 'static>(&mut self) -> Option<&mut T> {
if self.__private_get_type_id__().0 == std::any::TypeId::of::<T>() { if self.__private_get_type_id__().0 == std::any::TypeId::of::<T>() {
// Safety: external crates cannot override the default // SAFETY: external crates cannot override the default
// implementation of `__private_get_type_id__`, since // implementation of `__private_get_type_id__`, since
// it requires returning a private type. We can therefore // it requires returning a private type. We can therefore
// rely on the returned `TypeId`, which ensures that this // rely on the returned `TypeId`, which ensures that this
@ -86,7 +87,7 @@ mod tests {
let body = resp_body.downcast_ref::<String>().unwrap(); let body = resp_body.downcast_ref::<String>().unwrap();
assert_eq!(body, "hello cast"); assert_eq!(body, "hello cast");
let body = &mut resp_body.downcast_mut::<String>().unwrap(); let body = &mut resp_body.downcast_mut::<String>().unwrap();
body.push_str("!"); body.push('!');
let body = resp_body.downcast_ref::<String>().unwrap(); let body = resp_body.downcast_ref::<String>().unwrap();
assert_eq!(body, "hello cast!"); assert_eq!(body, "hello cast!");
let not_body = resp_body.downcast_ref::<()>(); let not_body = resp_body.downcast_ref::<()>();


@ -554,8 +554,9 @@ impl ResponseBuilder {
self self
} }
/// This method calls provided closure with builder reference if value is /// This method calls provided closure with builder reference if value is `true`.
/// true. #[doc(hidden)]
#[deprecated = "Use an if statement."]
pub fn if_true<F>(&mut self, value: bool, f: F) -> &mut Self pub fn if_true<F>(&mut self, value: bool, f: F) -> &mut Self
where where
F: FnOnce(&mut ResponseBuilder), F: FnOnce(&mut ResponseBuilder),
@ -566,8 +567,9 @@ impl ResponseBuilder {
self self
} }
/// This method calls provided closure with builder reference if value is /// This method calls provided closure with builder reference if value is `Some`.
/// Some. #[doc(hidden)]
#[deprecated = "Use an if-let construction."]
pub fn if_some<T, F>(&mut self, value: Option<T>, f: F) -> &mut Self pub fn if_some<T, F>(&mut self, value: Option<T>, f: F) -> &mut Self
where where
F: FnOnce(T, &mut ResponseBuilder), F: FnOnce(T, &mut ResponseBuilder),
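With `if_true` and `if_some` deprecated, the suggested replacement is ordinary control flow; a hedged sketch (header name and condition are illustrative):

use actix_http::Response;
use http::header::CONTENT_ENCODING;

// Instead of `builder.if_true(compressed, |b| { ... })`, branch directly.
fn build(compressed: bool) -> Response {
    let mut builder = Response::Ok();

    if compressed {
        builder.header(CONTENT_ENCODING, "gzip");
    }

    builder.finish()
}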


@ -1,7 +1,7 @@
use std::marker::PhantomData; use std::marker::PhantomData;
use std::pin::Pin; use std::pin::Pin;
use std::task::{Context, Poll}; use std::task::{Context, Poll};
use std::{fmt, net, rc}; use std::{fmt, net, rc::Rc};
use actix_codec::{AsyncRead, AsyncWrite, Framed}; use actix_codec::{AsyncRead, AsyncWrite, Framed};
use actix_rt::net::TcpStream; use actix_rt::net::TcpStream;
@ -20,15 +20,17 @@ use crate::error::{DispatchError, Error};
use crate::helpers::DataFactory; use crate::helpers::DataFactory;
use crate::request::Request; use crate::request::Request;
use crate::response::Response; use crate::response::Response;
use crate::{h1, h2::Dispatcher, Protocol}; use crate::{h1, h2::Dispatcher, ConnectCallback, Extensions, Protocol};
/// `ServiceFactory` HTTP1.1/HTTP2 transport implementation /// A `ServiceFactory` for HTTP/1.1 or HTTP/2 protocol.
pub struct HttpService<T, S, B, X = h1::ExpectHandler, U = h1::UpgradeHandler<T>> { pub struct HttpService<T, S, B, X = h1::ExpectHandler, U = h1::UpgradeHandler<T>> {
srv: S, srv: S,
cfg: ServiceConfig, cfg: ServiceConfig,
expect: X, expect: X,
upgrade: Option<U>, upgrade: Option<U>,
on_connect: Option<rc::Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>, // DEPRECATED: in favor of on_connect_ext
on_connect: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>,
on_connect_ext: Option<Rc<ConnectCallback<T>>>,
_t: PhantomData<(T, B)>, _t: PhantomData<(T, B)>,
} }
@ -66,6 +68,7 @@ where
expect: h1::ExpectHandler, expect: h1::ExpectHandler,
upgrade: None, upgrade: None,
on_connect: None, on_connect: None,
on_connect_ext: None,
_t: PhantomData, _t: PhantomData,
} }
} }
@ -81,6 +84,7 @@ where
expect: h1::ExpectHandler, expect: h1::ExpectHandler,
upgrade: None, upgrade: None,
on_connect: None, on_connect: None,
on_connect_ext: None,
_t: PhantomData, _t: PhantomData,
} }
} }
@ -113,6 +117,7 @@ where
srv: self.srv, srv: self.srv,
upgrade: self.upgrade, upgrade: self.upgrade,
on_connect: self.on_connect, on_connect: self.on_connect,
on_connect_ext: self.on_connect_ext,
_t: PhantomData, _t: PhantomData,
} }
} }
@ -138,6 +143,7 @@ where
srv: self.srv, srv: self.srv,
expect: self.expect, expect: self.expect,
on_connect: self.on_connect, on_connect: self.on_connect,
on_connect_ext: self.on_connect_ext,
_t: PhantomData, _t: PhantomData,
} }
} }
@ -145,11 +151,17 @@ where
/// Set on connect callback. /// Set on connect callback.
pub(crate) fn on_connect( pub(crate) fn on_connect(
mut self, mut self,
f: Option<rc::Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>, f: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>,
) -> Self { ) -> Self {
self.on_connect = f; self.on_connect = f;
self self
} }
/// Set connect callback with mutable access to request data container.
pub(crate) fn on_connect_ext(mut self, f: Option<Rc<ConnectCallback<T>>>) -> Self {
self.on_connect_ext = f;
self
}
} }
impl<S, B, X, U> HttpService<TcpStream, S, B, X, U> impl<S, B, X, U> HttpService<TcpStream, S, B, X, U>
@ -195,7 +207,7 @@ where
mod openssl { mod openssl {
use super::*; use super::*;
use actix_tls::openssl::{Acceptor, SslAcceptor, SslStream}; use actix_tls::openssl::{Acceptor, SslAcceptor, SslStream};
use actix_tls::{openssl::HandshakeError, SslError}; use actix_tls::{openssl::HandshakeError, TlsError};
impl<S, B, X, U> HttpService<SslStream<TcpStream>, S, B, X, U> impl<S, B, X, U> HttpService<SslStream<TcpStream>, S, B, X, U>
where where
@ -226,12 +238,12 @@ mod openssl {
Config = (), Config = (),
Request = TcpStream, Request = TcpStream,
Response = (), Response = (),
Error = SslError<HandshakeError<TcpStream>, DispatchError>, Error = TlsError<HandshakeError<TcpStream>, DispatchError>,
InitError = (), InitError = (),
> { > {
pipeline_factory( pipeline_factory(
Acceptor::new(acceptor) Acceptor::new(acceptor)
.map_err(SslError::Ssl) .map_err(TlsError::Tls)
.map_init_err(|_| panic!()), .map_init_err(|_| panic!()),
) )
.and_then(|io: SslStream<TcpStream>| { .and_then(|io: SslStream<TcpStream>| {
@ -247,7 +259,7 @@ mod openssl {
let peer_addr = io.get_ref().peer_addr().ok(); let peer_addr = io.get_ref().peer_addr().ok();
ok((io, proto, peer_addr)) ok((io, proto, peer_addr))
}) })
.and_then(self.map_err(SslError::Service)) .and_then(self.map_err(TlsError::Service))
} }
} }
} }
@ -256,7 +268,7 @@ mod openssl {
mod rustls { mod rustls {
use super::*; use super::*;
use actix_tls::rustls::{Acceptor, ServerConfig, Session, TlsStream}; use actix_tls::rustls::{Acceptor, ServerConfig, Session, TlsStream};
use actix_tls::SslError; use actix_tls::TlsError;
use std::io; use std::io;
impl<S, B, X, U> HttpService<TlsStream<TcpStream>, S, B, X, U> impl<S, B, X, U> HttpService<TlsStream<TcpStream>, S, B, X, U>
@ -288,7 +300,7 @@ mod rustls {
Config = (), Config = (),
Request = TcpStream, Request = TcpStream,
Response = (), Response = (),
Error = SslError<io::Error, DispatchError>, Error = TlsError<io::Error, DispatchError>,
InitError = (), InitError = (),
> { > {
let protos = vec!["h2".to_string().into(), "http/1.1".to_string().into()]; let protos = vec!["h2".to_string().into(), "http/1.1".to_string().into()];
@ -296,7 +308,7 @@ mod rustls {
pipeline_factory( pipeline_factory(
Acceptor::new(config) Acceptor::new(config)
.map_err(SslError::Ssl) .map_err(TlsError::Tls)
.map_init_err(|_| panic!()), .map_init_err(|_| panic!()),
) )
.and_then(|io: TlsStream<TcpStream>| { .and_then(|io: TlsStream<TcpStream>| {
@ -312,7 +324,7 @@ mod rustls {
let peer_addr = io.get_ref().0.peer_addr().ok(); let peer_addr = io.get_ref().0.peer_addr().ok();
ok((io, proto, peer_addr)) ok((io, proto, peer_addr))
}) })
.and_then(self.map_err(SslError::Service)) .and_then(self.map_err(TlsError::Service))
} }
} }
} }
@ -355,6 +367,7 @@ where
expect: None, expect: None,
upgrade: None, upgrade: None,
on_connect: self.on_connect.clone(), on_connect: self.on_connect.clone(),
on_connect_ext: self.on_connect_ext.clone(),
cfg: self.cfg.clone(), cfg: self.cfg.clone(),
_t: PhantomData, _t: PhantomData,
} }
@ -378,7 +391,8 @@ pub struct HttpServiceResponse<
fut_upg: Option<U::Future>, fut_upg: Option<U::Future>,
expect: Option<X::Service>, expect: Option<X::Service>,
upgrade: Option<U::Service>, upgrade: Option<U::Service>,
on_connect: Option<rc::Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>, on_connect: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>,
on_connect_ext: Option<Rc<ConnectCallback<T>>>,
cfg: ServiceConfig, cfg: ServiceConfig,
_t: PhantomData<(T, B)>, _t: PhantomData<(T, B)>,
} }
@ -429,6 +443,7 @@ where
.fut .fut
.poll(cx) .poll(cx)
.map_err(|e| log::error!("Init http service error: {:?}", e))); .map_err(|e| log::error!("Init http service error: {:?}", e)));
Poll::Ready(result.map(|service| { Poll::Ready(result.map(|service| {
let this = self.as_mut().project(); let this = self.as_mut().project();
HttpServiceHandler::new( HttpServiceHandler::new(
@ -437,6 +452,7 @@ where
this.expect.take().unwrap(), this.expect.take().unwrap(),
this.upgrade.take(), this.upgrade.take(),
this.on_connect.clone(), this.on_connect.clone(),
this.on_connect_ext.clone(),
) )
})) }))
} }
@ -448,7 +464,8 @@ pub struct HttpServiceHandler<T, S: Service, B, X: Service, U: Service> {
expect: CloneableService<X>, expect: CloneableService<X>,
upgrade: Option<CloneableService<U>>, upgrade: Option<CloneableService<U>>,
cfg: ServiceConfig, cfg: ServiceConfig,
on_connect: Option<rc::Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>, on_connect: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>,
on_connect_ext: Option<Rc<ConnectCallback<T>>>,
_t: PhantomData<(T, B, X)>, _t: PhantomData<(T, B, X)>,
} }
@ -469,11 +486,13 @@ where
srv: S, srv: S,
expect: X, expect: X,
upgrade: Option<U>, upgrade: Option<U>,
on_connect: Option<rc::Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>, on_connect: Option<Rc<dyn Fn(&T) -> Box<dyn DataFactory>>>,
on_connect_ext: Option<Rc<ConnectCallback<T>>>,
) -> HttpServiceHandler<T, S, B, X, U> { ) -> HttpServiceHandler<T, S, B, X, U> {
HttpServiceHandler { HttpServiceHandler {
cfg, cfg,
on_connect, on_connect,
on_connect_ext,
srv: CloneableService::new(srv), srv: CloneableService::new(srv),
expect: CloneableService::new(expect), expect: CloneableService::new(expect),
upgrade: upgrade.map(CloneableService::new), upgrade: upgrade.map(CloneableService::new),
@ -543,11 +562,12 @@ where
} }
fn call(&mut self, (io, proto, peer_addr): Self::Request) -> Self::Future { fn call(&mut self, (io, proto, peer_addr): Self::Request) -> Self::Future {
-        let on_connect = if let Some(ref on_connect) = self.on_connect {
-            Some(on_connect(&io))
-        } else {
-            None
-        };
+        let mut connect_extensions = Extensions::new();
+
+        let deprecated_on_connect = self.on_connect.as_ref().map(|handler| handler(&io));
+
+        if let Some(ref handler) = self.on_connect_ext {
+            handler(&io, &mut connect_extensions);
+        }
match proto { match proto {
Protocol::Http2 => HttpServiceHandlerResponse { Protocol::Http2 => HttpServiceHandlerResponse {
@ -555,10 +575,12 @@ where
server::handshake(io), server::handshake(io),
self.cfg.clone(), self.cfg.clone(),
self.srv.clone(), self.srv.clone(),
on_connect, deprecated_on_connect,
connect_extensions,
peer_addr, peer_addr,
))), ))),
}, },
Protocol::Http1 => HttpServiceHandlerResponse { Protocol::Http1 => HttpServiceHandlerResponse {
state: State::H1(h1::Dispatcher::new( state: State::H1(h1::Dispatcher::new(
io, io,
@ -566,7 +588,8 @@ where
self.srv.clone(), self.srv.clone(),
self.expect.clone(), self.expect.clone(),
self.upgrade.clone(), self.upgrade.clone(),
on_connect, deprecated_on_connect,
connect_extensions,
peer_addr, peer_addr,
)), )),
}, },
@ -595,6 +618,7 @@ where
ServiceConfig, ServiceConfig,
CloneableService<S>, CloneableService<S>,
Option<Box<dyn DataFactory>>, Option<Box<dyn DataFactory>>,
Extensions,
Option<net::SocketAddr>, Option<net::SocketAddr>,
)>, )>,
), ),
@ -670,9 +694,16 @@ where
} else { } else {
panic!() panic!()
}; };
-                    let (_, cfg, srv, on_connect, peer_addr) = data.take().unwrap();
+                    let (_, cfg, srv, on_connect, on_connect_data, peer_addr) =
+                        data.take().unwrap();

                    self.set(State::H2(Dispatcher::new(
-                        srv, conn, on_connect, cfg, None, peer_addr,
+                        srv,
+                        conn,
+                        on_connect,
+                        on_connect_data,
+                        cfg,
+                        None,
+                        peer_addr,
                    )));
self.poll(cx) self.poll(cx)
} }


@ -1,9 +1,14 @@
//! Test Various helpers for Actix applications to use during testing. //! Various testing helpers for use in internal and app tests.
use std::convert::TryFrom;
use std::io::{self, Read, Write}; use std::{
use std::pin::Pin; cell::{Ref, RefCell},
use std::str::FromStr; convert::TryFrom,
use std::task::{Context, Poll}; io::{self, Read, Write},
pin::Pin,
rc::Rc,
str::FromStr,
task::{Context, Poll},
};
use actix_codec::{AsyncRead, AsyncWrite}; use actix_codec::{AsyncRead, AsyncWrite};
use bytes::{Bytes, BytesMut}; use bytes::{Bytes, BytesMut};
@ -183,7 +188,7 @@ fn parts(parts: &mut Option<Inner>) -> &mut Inner {
parts.as_mut().expect("cannot reuse test request builder") parts.as_mut().expect("cannot reuse test request builder")
} }
/// Async io buffer /// Async I/O test buffer.
pub struct TestBuffer { pub struct TestBuffer {
pub read_buf: BytesMut, pub read_buf: BytesMut,
pub write_buf: BytesMut, pub write_buf: BytesMut,
@ -191,24 +196,24 @@ pub struct TestBuffer {
} }
impl TestBuffer { impl TestBuffer {
/// Create new TestBuffer instance /// Create new `TestBuffer` instance with initial read buffer.
pub fn new<T>(data: T) -> TestBuffer pub fn new<T>(data: T) -> Self
where where
BytesMut: From<T>, T: Into<BytesMut>,
{ {
TestBuffer { Self {
read_buf: BytesMut::from(data), read_buf: data.into(),
write_buf: BytesMut::new(), write_buf: BytesMut::new(),
err: None, err: None,
} }
} }
/// Create new empty TestBuffer instance /// Create new empty `TestBuffer` instance.
pub fn empty() -> TestBuffer { pub fn empty() -> Self {
TestBuffer::new("") Self::new("")
} }
/// Add extra data to read buffer. /// Add data to read buffer.
pub fn extend_read_buf<T: AsRef<[u8]>>(&mut self, data: T) { pub fn extend_read_buf<T: AsRef<[u8]>>(&mut self, data: T) {
self.read_buf.extend_from_slice(data.as_ref()) self.read_buf.extend_from_slice(data.as_ref())
} }
@ -236,6 +241,7 @@ impl io::Write for TestBuffer {
self.write_buf.extend(buf); self.write_buf.extend(buf);
Ok(buf.len()) Ok(buf.len())
} }
fn flush(&mut self) -> io::Result<()> { fn flush(&mut self) -> io::Result<()> {
Ok(()) Ok(())
} }
@ -268,3 +274,113 @@ impl AsyncWrite for TestBuffer {
Poll::Ready(Ok(())) Poll::Ready(Ok(()))
} }
} }
/// Async I/O test buffer with ability to incrementally add to the read buffer.
#[derive(Clone)]
pub struct TestSeqBuffer(Rc<RefCell<TestSeqInner>>);
impl TestSeqBuffer {
/// Create new `TestBuffer` instance with initial read buffer.
pub fn new<T>(data: T) -> Self
where
T: Into<BytesMut>,
{
Self(Rc::new(RefCell::new(TestSeqInner {
read_buf: data.into(),
write_buf: BytesMut::new(),
err: None,
})))
}
/// Create new empty `TestBuffer` instance.
pub fn empty() -> Self {
Self::new("")
}
pub fn read_buf(&self) -> Ref<'_, BytesMut> {
Ref::map(self.0.borrow(), |inner| &inner.read_buf)
}
pub fn write_buf(&self) -> Ref<'_, BytesMut> {
Ref::map(self.0.borrow(), |inner| &inner.write_buf)
}
pub fn err(&self) -> Ref<'_, Option<io::Error>> {
Ref::map(self.0.borrow(), |inner| &inner.err)
}
/// Add data to read buffer.
pub fn extend_read_buf<T: AsRef<[u8]>>(&mut self, data: T) {
self.0
.borrow_mut()
.read_buf
.extend_from_slice(data.as_ref())
}
}
pub struct TestSeqInner {
read_buf: BytesMut,
write_buf: BytesMut,
err: Option<io::Error>,
}
impl io::Read for TestSeqBuffer {
fn read(&mut self, dst: &mut [u8]) -> Result<usize, io::Error> {
if self.0.borrow().read_buf.is_empty() {
if self.0.borrow().err.is_some() {
Err(self.0.borrow_mut().err.take().unwrap())
} else {
Err(io::Error::new(io::ErrorKind::WouldBlock, ""))
}
} else {
let size = std::cmp::min(self.0.borrow().read_buf.len(), dst.len());
let b = self.0.borrow_mut().read_buf.split_to(size);
dst[..size].copy_from_slice(&b);
Ok(size)
}
}
}
impl io::Write for TestSeqBuffer {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.0.borrow_mut().write_buf.extend(buf);
Ok(buf.len())
}
fn flush(&mut self) -> io::Result<()> {
Ok(())
}
}
impl AsyncRead for TestSeqBuffer {
fn poll_read(
self: Pin<&mut Self>,
_: &mut Context<'_>,
buf: &mut [u8],
) -> Poll<io::Result<usize>> {
let r = self.get_mut().read(buf);
match r {
Ok(n) => Poll::Ready(Ok(n)),
Err(err) if err.kind() == io::ErrorKind::WouldBlock => Poll::Pending,
Err(err) => Poll::Ready(Err(err)),
}
}
}
impl AsyncWrite for TestSeqBuffer {
fn poll_write(
self: Pin<&mut Self>,
_: &mut Context<'_>,
buf: &[u8],
) -> Poll<io::Result<usize>> {
Poll::Ready(self.get_mut().write(buf))
}
fn poll_flush(self: Pin<&mut Self>, _: &mut Context<'_>) -> Poll<io::Result<()>> {
Poll::Ready(Ok(()))
}
fn poll_shutdown(self: Pin<&mut Self>, _: &mut Context<'_>) -> Poll<io::Result<()>> {
Poll::Ready(Ok(()))
}
}
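A short sketch of how the new sequential buffer can feed a test in stages (the `actix_http::test` path is assumed from the module shown above):

use actix_http::test::TestSeqBuffer;

#[test]
fn feed_incrementally() {
    let mut buf = TestSeqBuffer::empty();
    assert!(buf.read_buf().is_empty());

    // Data added after construction becomes visible through the shared
    // read buffer, so a dispatcher under test can be driven step by step.
    buf.extend_read_buf("GET /test HTTP/1.1\r\n");
    buf.extend_read_buf("\r\n");
    assert_eq!(&buf.read_buf()[..], &b"GET /test HTTP/1.1\r\n\r\n"[..]);
}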


@ -91,8 +91,7 @@ impl Codec {
} }
} }
impl Encoder for Codec { impl Encoder<Message> for Codec {
type Item = Message;
type Error = ProtocolError; type Error = ProtocolError;
fn encode(&mut self, item: Message, dst: &mut BytesMut) -> Result<(), Self::Error> { fn encode(&mut self, item: Message, dst: &mut BytesMut) -> Result<(), Self::Error> {
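The codec now uses the generic `Encoder<Item>` shape from actix-codec 0.3, where the item type is a trait parameter rather than an associated type; a toy codec illustrating the new form (not part of the crate):

use actix_codec::Encoder;
use bytes::BytesMut;

struct LineCodec;

// Hypothetical line-delimited encoder written against the new trait shape.
impl Encoder<String> for LineCodec {
    type Error = std::io::Error;

    fn encode(&mut self, item: String, dst: &mut BytesMut) -> Result<(), Self::Error> {
        dst.extend_from_slice(item.as_bytes());
        dst.extend_from_slice(b"\n");
        Ok(())
    }
}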


@ -4,16 +4,18 @@ use std::task::{Context, Poll};
use actix_codec::{AsyncRead, AsyncWrite, Framed}; use actix_codec::{AsyncRead, AsyncWrite, Framed};
use actix_service::{IntoService, Service}; use actix_service::{IntoService, Service};
use actix_utils::framed; use actix_utils::dispatcher::{Dispatcher as InnerDispatcher, DispatcherError};
use super::{Codec, Frame, Message}; use super::{Codec, Frame, Message};
#[pin_project::pin_project]
pub struct Dispatcher<S, T> pub struct Dispatcher<S, T>
where where
S: Service<Request = Frame, Response = Message> + 'static, S: Service<Request = Frame, Response = Message> + 'static,
T: AsyncRead + AsyncWrite, T: AsyncRead + AsyncWrite,
{ {
inner: framed::Dispatcher<S, T, Codec>, #[pin]
inner: InnerDispatcher<S, T, Codec, Message>,
} }
impl<S, T> Dispatcher<S, T> impl<S, T> Dispatcher<S, T>
@ -25,13 +27,13 @@ where
{ {
pub fn new<F: IntoService<S>>(io: T, service: F) -> Self { pub fn new<F: IntoService<S>>(io: T, service: F) -> Self {
Dispatcher { Dispatcher {
inner: framed::Dispatcher::new(Framed::new(io, Codec::new()), service), inner: InnerDispatcher::new(Framed::new(io, Codec::new()), service),
} }
} }
pub fn with<F: IntoService<S>>(framed: Framed<T, Codec>, service: F) -> Self { pub fn with<F: IntoService<S>>(framed: Framed<T, Codec>, service: F) -> Self {
Dispatcher { Dispatcher {
inner: framed::Dispatcher::new(framed, service), inner: InnerDispatcher::new(framed, service),
} }
} }
} }
@ -43,9 +45,9 @@ where
S::Future: 'static, S::Future: 'static,
S::Error: 'static, S::Error: 'static,
{ {
type Output = Result<(), framed::DispatcherError<S::Error, Codec>>; type Output = Result<(), DispatcherError<S::Error, Codec, Message>>;
fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> { fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
Pin::new(&mut self.inner).poll(cx) self.project().inner.poll(cx)
} }
} }


@ -4,16 +4,21 @@ use std::ptr::copy_nonoverlapping;
use std::slice; use std::slice;
// Holds a slice guaranteed to be shorter than 8 bytes // Holds a slice guaranteed to be shorter than 8 bytes
struct ShortSlice<'a>(&'a mut [u8]); struct ShortSlice<'a> {
inner: &'a mut [u8],
}
impl<'a> ShortSlice<'a> { impl<'a> ShortSlice<'a> {
/// # Safety
/// Given slice must be shorter than 8 bytes.
unsafe fn new(slice: &'a mut [u8]) -> Self { unsafe fn new(slice: &'a mut [u8]) -> Self {
// Sanity check for debug builds // Sanity check for debug builds
debug_assert!(slice.len() < 8); debug_assert!(slice.len() < 8);
ShortSlice(slice) ShortSlice { inner: slice }
} }
fn len(&self) -> usize { fn len(&self) -> usize {
self.0.len() self.inner.len()
} }
} }
@ -46,15 +51,15 @@ pub(crate) fn apply_mask(buf: &mut [u8], mask_u32: u32) {
} }
} }
#[inline]
// TODO: copy_nonoverlapping here compiles to call memcpy. While it is not so // TODO: copy_nonoverlapping here compiles to call memcpy. While it is not so
// inefficient, it could be done better. The compiler does not understand that // inefficient, it could be done better. The compiler does not understand that
// a `ShortSlice` must be smaller than a u64. // a `ShortSlice` must be smaller than a u64.
#[inline]
#[allow(clippy::needless_pass_by_value)] #[allow(clippy::needless_pass_by_value)]
fn xor_short(buf: ShortSlice<'_>, mask: u64) { fn xor_short(buf: ShortSlice<'_>, mask: u64) {
// Unsafe: we know that a `ShortSlice` fits in a u64 // SAFETY: we know that a `ShortSlice` fits in a u64
unsafe { unsafe {
let (ptr, len) = (buf.0.as_mut_ptr(), buf.0.len()); let (ptr, len) = (buf.inner.as_mut_ptr(), buf.len());
let mut b: u64 = 0; let mut b: u64 = 0;
#[allow(trivial_casts)] #[allow(trivial_casts)]
copy_nonoverlapping(ptr, &mut b as *mut _ as *mut u8, len); copy_nonoverlapping(ptr, &mut b as *mut _ as *mut u8, len);
@ -64,8 +69,9 @@ fn xor_short(buf: ShortSlice<'_>, mask: u64) {
} }
} }
/// # Safety
/// Caller must ensure the buffer has the correct size and alignment.
#[inline] #[inline]
// Unsafe: caller must ensure the buffer has the correct size and alignment
unsafe fn cast_slice(buf: &mut [u8]) -> &mut [u64] { unsafe fn cast_slice(buf: &mut [u8]) -> &mut [u64] {
// Assert correct size and alignment in debug builds // Assert correct size and alignment in debug builds
debug_assert!(buf.len().trailing_zeros() >= 3); debug_assert!(buf.len().trailing_zeros() >= 3);
@ -74,9 +80,9 @@ unsafe fn cast_slice(buf: &mut [u8]) -> &mut [u64] {
slice::from_raw_parts_mut(buf.as_mut_ptr() as *mut u64, buf.len() >> 3) slice::from_raw_parts_mut(buf.as_mut_ptr() as *mut u64, buf.len() >> 3)
} }
#[inline]
// Splits a slice into three parts: an unaligned short head and tail, plus an aligned // Splits a slice into three parts: an unaligned short head and tail, plus an aligned
// u64 mid section. // u64 mid section.
#[inline]
fn align_buf(buf: &mut [u8]) -> (ShortSlice<'_>, &mut [u64], ShortSlice<'_>) { fn align_buf(buf: &mut [u8]) -> (ShortSlice<'_>, &mut [u64], ShortSlice<'_>) {
let start_ptr = buf.as_ptr() as usize; let start_ptr = buf.as_ptr() as usize;
let end_ptr = start_ptr + buf.len(); let end_ptr = start_ptr + buf.len();
@ -91,13 +97,19 @@ fn align_buf(buf: &mut [u8]) -> (ShortSlice<'_>, &mut [u64], ShortSlice<'_>) {
let (tmp, tail) = buf.split_at_mut(end_aligned - start_ptr); let (tmp, tail) = buf.split_at_mut(end_aligned - start_ptr);
let (head, mid) = tmp.split_at_mut(start_aligned - start_ptr); let (head, mid) = tmp.split_at_mut(start_aligned - start_ptr);
// Unsafe: we know the middle section is correctly aligned, and the outer // SAFETY: we know the middle section is correctly aligned, and the outer
// sections are smaller than 8 bytes // sections are smaller than 8 bytes
unsafe { (ShortSlice::new(head), cast_slice(mid), ShortSlice(tail)) } unsafe {
(
ShortSlice::new(head),
cast_slice(mid),
ShortSlice::new(tail),
)
}
} else { } else {
// We didn't cross even one aligned boundary! // We didn't cross even one aligned boundary!
// Unsafe: The outer sections are smaller than 8 bytes // SAFETY: The outer sections are smaller than 8 bytes
unsafe { (ShortSlice::new(buf), &mut [], ShortSlice::new(&mut [])) } unsafe { (ShortSlice::new(buf), &mut [], ShortSlice::new(&mut [])) }
} }
} }
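For reference, the optimized routines above are equivalent to XOR-ing every payload byte with the corresponding byte of the 4-byte mask (RFC 6455 frame masking); a naive sketch, treating the mask's byte order as native-endian, which is an assumption here:

// Byte-by-byte reference version of `apply_mask`: cycle through the four
// mask bytes and XOR them into the buffer.
fn apply_mask_naive(buf: &mut [u8], mask_u32: u32) {
    let mask = mask_u32.to_ne_bytes();
    for (i, byte) in buf.iter_mut().enumerate() {
        *byte ^= mask[i & 3];
    }
}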


@ -197,13 +197,13 @@ mod tests {
let req = TestRequest::default().method(Method::POST).finish(); let req = TestRequest::default().method(Method::POST).finish();
assert_eq!( assert_eq!(
HandshakeError::GetMethodRequired, HandshakeError::GetMethodRequired,
verify_handshake(req.head()).err().unwrap() verify_handshake(req.head()).unwrap_err(),
); );
let req = TestRequest::default().finish(); let req = TestRequest::default().finish();
assert_eq!( assert_eq!(
HandshakeError::NoWebsocketUpgrade, HandshakeError::NoWebsocketUpgrade,
verify_handshake(req.head()).err().unwrap() verify_handshake(req.head()).unwrap_err(),
); );
let req = TestRequest::default() let req = TestRequest::default()
@ -211,7 +211,7 @@ mod tests {
.finish(); .finish();
assert_eq!( assert_eq!(
HandshakeError::NoWebsocketUpgrade, HandshakeError::NoWebsocketUpgrade,
verify_handshake(req.head()).err().unwrap() verify_handshake(req.head()).unwrap_err(),
); );
let req = TestRequest::default() let req = TestRequest::default()
@ -222,7 +222,7 @@ mod tests {
.finish(); .finish();
assert_eq!( assert_eq!(
HandshakeError::NoConnectionUpgrade, HandshakeError::NoConnectionUpgrade,
verify_handshake(req.head()).err().unwrap() verify_handshake(req.head()).unwrap_err(),
); );
let req = TestRequest::default() let req = TestRequest::default()
@ -237,7 +237,7 @@ mod tests {
.finish(); .finish();
assert_eq!( assert_eq!(
HandshakeError::NoVersionHeader, HandshakeError::NoVersionHeader,
verify_handshake(req.head()).err().unwrap() verify_handshake(req.head()).unwrap_err(),
); );
let req = TestRequest::default() let req = TestRequest::default()
@ -256,7 +256,7 @@ mod tests {
.finish(); .finish();
assert_eq!( assert_eq!(
HandshakeError::UnsupportedVersion, HandshakeError::UnsupportedVersion,
verify_handshake(req.head()).err().unwrap() verify_handshake(req.head()).unwrap_err(),
); );
let req = TestRequest::default() let req = TestRequest::default()
@ -275,7 +275,7 @@ mod tests {
.finish(); .finish();
assert_eq!( assert_eq!(
HandshakeError::BadWebsocketKey, HandshakeError::BadWebsocketKey,
verify_handshake(req.head()).err().unwrap() verify_handshake(req.head()).unwrap_err(),
); );
let req = TestRequest::default() let req = TestRequest::default()


@ -411,8 +411,10 @@ async fn test_h2_on_connect() {
let srv = test_server(move || { let srv = test_server(move || {
HttpService::build() HttpService::build()
.on_connect(|_| 10usize) .on_connect(|_| 10usize)
.on_connect_ext(|_, data| data.insert(20isize))
.h2(|req: Request| { .h2(|req: Request| {
assert!(req.extensions().contains::<usize>()); assert!(req.extensions().contains::<usize>());
assert!(req.extensions().contains::<isize>());
ok::<_, ()>(Response::Ok().finish()) ok::<_, ()>(Response::Ok().finish())
}) })
.openssl(ssl_acceptor()) .openssl(ssl_acceptor())


@ -663,8 +663,10 @@ async fn test_h1_on_connect() {
let srv = test_server(|| { let srv = test_server(|| {
HttpService::build() HttpService::build()
.on_connect(|_| 10usize) .on_connect(|_| 10usize)
.on_connect_ext(|_, data| data.insert(20isize))
.h1(|req: Request| { .h1(|req: Request| {
assert!(req.extensions().contains::<usize>()); assert!(req.extensions().contains::<usize>());
assert!(req.extensions().contains::<isize>());
future::ok::<_, ()>(Response::Ok().finish()) future::ok::<_, ()>(Response::Ok().finish())
}) })
.tcp() .tcp()


@ -8,7 +8,7 @@ use actix_codec::{AsyncRead, AsyncWrite, Framed};
use actix_http::{body, h1, ws, Error, HttpService, Request, Response}; use actix_http::{body, h1, ws, Error, HttpService, Request, Response};
use actix_http_test::test_server; use actix_http_test::test_server;
use actix_service::{fn_factory, Service}; use actix_service::{fn_factory, Service};
use actix_utils::framed::Dispatcher; use actix_utils::dispatcher::Dispatcher;
use bytes::Bytes; use bytes::Bytes;
use futures_util::future; use futures_util::future;
use futures_util::task::{Context, Poll}; use futures_util::task::{Context, Poll};
@ -59,7 +59,7 @@ where
.await .await
.unwrap(); .unwrap();
Dispatcher::new(framed.into_framed(ws::Codec::new()), service) Dispatcher::new(framed.replace_codec(ws::Codec::new()), service)
.await .await
.map_err(|_| panic!()) .map_err(|_| panic!())
}; };


@ -1,11 +0,0 @@
# Identity service for actix web framework [![Build Status](https://travis-ci.org/actix/actix-web.svg?branch=master)](https://travis-ci.org/actix/actix-web) [![codecov](https://codecov.io/gh/actix/actix-web/branch/master/graph/badge.svg)](https://codecov.io/gh/actix/actix-web) [![crates.io](https://meritbadge.herokuapp.com/actix-identity)](https://crates.io/crates/actix-identity) [![Join the chat at https://gitter.im/actix/actix](https://badges.gitter.im/actix/actix.svg)](https://gitter.im/actix/actix?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
**This crate moved to https://github.com/actix/actix-extras.**
## Documentation & community resources
* [User Guide](https://actix.rs/docs/)
* [API Documentation](https://docs.rs/actix-identity/)
* [Chat on gitter](https://gitter.im/actix/actix)
* Cargo package: [actix-session](https://crates.io/crates/actix-identity)
* Minimum supported Rust version: 1.34 or later


@ -1,6 +1,15 @@
# Changes # Changes
## Unreleased - 2020-xx-xx ## Unreleased - 2020-xx-xx
* Fix multipart consuming payload before header checks #1513
## 3.0.0 - 2020-09-11
* No significant changes from `3.0.0-beta.2`.
## 3.0.0-beta.2 - 2020-09-10
* Update `actix-*` dependencies to latest versions.
## 0.3.0-beta.1 - 2020-07-15 ## 0.3.0-beta.1 - 2020-07-15


@ -1,6 +1,6 @@
[package] [package]
name = "actix-multipart" name = "actix-multipart"
version = "0.3.0-beta.1" version = "0.3.0"
authors = ["Nikolay Kim <fafhrd91@gmail.com>"] authors = ["Nikolay Kim <fafhrd91@gmail.com>"]
description = "Multipart support for actix web framework." description = "Multipart support for actix web framework."
readme = "README.md" readme = "README.md"
@ -16,9 +16,9 @@ name = "actix_multipart"
path = "src/lib.rs" path = "src/lib.rs"
[dependencies] [dependencies]
actix-web = { version = "3.0.0-beta.1", default-features = false } actix-web = { version = "3.0.0", default-features = false }
actix-service = "1.0.1" actix-service = "1.0.6"
actix-utils = "1.0.3" actix-utils = "2.0.0"
bytes = "0.5.3" bytes = "0.5.3"
derive_more = "0.99.2" derive_more = "0.99.2"
httparse = "1.3" httparse = "1.3"
@ -29,4 +29,4 @@ twoway = "0.2"
[dev-dependencies] [dev-dependencies]
actix-rt = "1.0.0" actix-rt = "1.0.0"
actix-http = "2.0.0-beta.1" actix-http = "2.0.0"


@ -36,6 +36,9 @@ impl FromRequest for Multipart {
#[inline] #[inline]
fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future { fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future {
ok(Multipart::new(req.headers(), payload.take())) ok(match Multipart::boundary(req.headers()) {
Ok(boundary) => Multipart::from_boundary(boundary, payload.take()),
Err(err) => Multipart::from_error(err),
})
} }
} }


@ -1,3 +1,6 @@
//! Multipart form support for Actix web.
#![deny(rust_2018_idioms)]
#![allow(clippy::borrow_interior_mutable_const)] #![allow(clippy::borrow_interior_mutable_const)]
mod error; mod error;


@ -1,4 +1,5 @@
//! Multipart payload support //! Multipart payload support
use std::cell::{Cell, RefCell, RefMut}; use std::cell::{Cell, RefCell, RefMut};
use std::convert::TryFrom; use std::convert::TryFrom;
use std::marker::PhantomData; use std::marker::PhantomData;
@ -63,26 +64,13 @@ impl Multipart {
S: Stream<Item = Result<Bytes, PayloadError>> + Unpin + 'static, S: Stream<Item = Result<Bytes, PayloadError>> + Unpin + 'static,
{ {
match Self::boundary(headers) { match Self::boundary(headers) {
Ok(boundary) => Multipart { Ok(boundary) => Multipart::from_boundary(boundary, stream),
error: None, Err(err) => Multipart::from_error(err),
safety: Safety::new(),
inner: Some(Rc::new(RefCell::new(InnerMultipart {
boundary,
payload: PayloadRef::new(PayloadBuffer::new(Box::new(stream))),
state: InnerState::FirstBoundary,
item: InnerMultipartItem::None,
}))),
},
Err(err) => Multipart {
error: Some(err),
safety: Safety::new(),
inner: None,
},
} }
} }
/// Extract boundary info from headers. /// Extract boundary info from headers.
fn boundary(headers: &HeaderMap) -> Result<String, MultipartError> { pub(crate) fn boundary(headers: &HeaderMap) -> Result<String, MultipartError> {
if let Some(content_type) = headers.get(&header::CONTENT_TYPE) { if let Some(content_type) = headers.get(&header::CONTENT_TYPE) {
if let Ok(content_type) = content_type.to_str() { if let Ok(content_type) = content_type.to_str() {
if let Ok(ct) = content_type.parse::<mime::Mime>() { if let Ok(ct) = content_type.parse::<mime::Mime>() {
@ -101,6 +89,32 @@ impl Multipart {
Err(MultipartError::NoContentType) Err(MultipartError::NoContentType)
} }
} }
/// Create multipart instance for given boundary and stream
pub(crate) fn from_boundary<S>(boundary: String, stream: S) -> Multipart
where
S: Stream<Item = Result<Bytes, PayloadError>> + Unpin + 'static,
{
Multipart {
error: None,
safety: Safety::new(),
inner: Some(Rc::new(RefCell::new(InnerMultipart {
boundary,
payload: PayloadRef::new(PayloadBuffer::new(Box::new(stream))),
state: InnerState::FirstBoundary,
item: InnerMultipartItem::None,
}))),
}
}
/// Create Multipart instance from MultipartError
pub(crate) fn from_error(err: MultipartError) -> Multipart {
Multipart {
error: Some(err),
safety: Safety::new(),
inner: None,
}
}
} }
impl Stream for Multipart { impl Stream for Multipart {
@ -108,7 +122,7 @@ impl Stream for Multipart {
fn poll_next( fn poll_next(
mut self: Pin<&mut Self>, mut self: Pin<&mut Self>,
cx: &mut Context, cx: &mut Context<'_>,
) -> Poll<Option<Self::Item>> { ) -> Poll<Option<Self::Item>> {
if let Some(err) = self.error.take() { if let Some(err) = self.error.take() {
Poll::Ready(Some(Err(err))) Poll::Ready(Some(Err(err)))
@ -244,7 +258,7 @@ impl InnerMultipart {
fn poll( fn poll(
&mut self, &mut self,
safety: &Safety, safety: &Safety,
cx: &mut Context, cx: &mut Context<'_>,
) -> Poll<Option<Result<Field, MultipartError>>> { ) -> Poll<Option<Result<Field, MultipartError>>> {
if self.state == InnerState::Eof { if self.state == InnerState::Eof {
Poll::Ready(None) Poll::Ready(None)
@ -416,7 +430,10 @@ impl Field {
impl Stream for Field { impl Stream for Field {
type Item = Result<Bytes, MultipartError>; type Item = Result<Bytes, MultipartError>;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> { fn poll_next(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Option<Self::Item>> {
if self.safety.current() { if self.safety.current() {
let mut inner = self.inner.borrow_mut(); let mut inner = self.inner.borrow_mut();
if let Some(mut payload) = if let Some(mut payload) =
@ -434,7 +451,7 @@ impl Stream for Field {
} }
impl fmt::Debug for Field { impl fmt::Debug for Field {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
writeln!(f, "\nField: {}", self.ct)?; writeln!(f, "\nField: {}", self.ct)?;
writeln!(f, " boundary: {}", self.inner.borrow().boundary)?; writeln!(f, " boundary: {}", self.inner.borrow().boundary)?;
writeln!(f, " headers:")?; writeln!(f, " headers:")?;
@ -689,7 +706,7 @@ impl Safety {
self.clean.get() self.clean.get()
} }
fn clone(&self, cx: &mut Context) -> Safety { fn clone(&self, cx: &mut Context<'_>) -> Safety {
let payload = Rc::clone(&self.payload); let payload = Rc::clone(&self.payload);
let s = Safety { let s = Safety {
task: LocalWaker::new(), task: LocalWaker::new(),
@ -708,9 +725,7 @@ impl Drop for Safety {
if Rc::strong_count(&self.payload) != self.level { if Rc::strong_count(&self.payload) != self.level {
self.clean.set(true); self.clean.set(true);
} }
if let Some(task) = self.task.take() { self.task.wake();
task.wake()
}
} }
} }
@ -734,7 +749,7 @@ impl PayloadBuffer {
} }
} }
fn poll_stream(&mut self, cx: &mut Context) -> Result<(), PayloadError> { fn poll_stream(&mut self, cx: &mut Context<'_>) -> Result<(), PayloadError> {
loop { loop {
match Pin::new(&mut self.stream).poll_next(cx) { match Pin::new(&mut self.stream).poll_next(cx) {
Poll::Ready(Some(Ok(data))) => self.buf.extend_from_slice(&data), Poll::Ready(Some(Ok(data))) => self.buf.extend_from_slice(&data),
@ -811,6 +826,8 @@ mod tests {
use actix_http::h1::Payload; use actix_http::h1::Payload;
use actix_utils::mpsc; use actix_utils::mpsc;
use actix_web::http::header::{DispositionParam, DispositionType}; use actix_web::http::header::{DispositionParam, DispositionType};
use actix_web::test::TestRequest;
use actix_web::FromRequest;
use bytes::Bytes; use bytes::Bytes;
use futures_util::future::lazy; use futures_util::future::lazy;
@ -887,7 +904,7 @@ mod tests {
fn poll_next( fn poll_next(
self: Pin<&mut Self>, self: Pin<&mut Self>,
cx: &mut Context, cx: &mut Context<'_>,
) -> Poll<Option<Self::Item>> { ) -> Poll<Option<Self::Item>> {
let this = self.get_mut(); let this = self.get_mut();
if !this.ready { if !this.ready {
@ -1147,4 +1164,38 @@ mod tests {
); );
assert_eq!(payload.buf.len(), 0); assert_eq!(payload.buf.len(), 0);
} }
#[actix_rt::test]
async fn test_multipart_from_error() {
let err = MultipartError::NoContentType;
let mut multipart = Multipart::from_error(err);
assert!(multipart.next().await.unwrap().is_err())
}
#[actix_rt::test]
async fn test_multipart_from_boundary() {
let (_, payload) = create_stream();
let (_, headers) = create_simple_request_with_header();
let boundary = Multipart::boundary(&headers);
assert!(boundary.is_ok());
let _ = Multipart::from_boundary(boundary.unwrap(), payload);
}
#[actix_rt::test]
async fn test_multipart_payload_consumption() {
// with sample payload and HttpRequest with no headers
let (_, inner_payload) = Payload::create(false);
let mut payload = actix_web::dev::Payload::from(inner_payload);
let req = TestRequest::default().to_http_request();
// multipart should generate an error
let mut mp = Multipart::from_request(&req, &mut payload).await.unwrap();
assert!(mp.next().await.unwrap().is_err());
// and should not consume the payload
match payload {
actix_web::dev::Payload::H1(_) => {} //expected
_ => unreachable!(),
}
}
} }
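Complementing the boundary tests above, a hedged sketch of the header shape that `Multipart::boundary` (and hence the new `from_request` path) relies on; the header value is illustrative:

use actix_web::http::header::{HeaderValue, CONTENT_TYPE};
use actix_web::http::HeaderMap;

#[test]
fn boundary_from_content_type() {
    // The boundary is read from the `boundary` parameter of Content-Type.
    let mut headers = HeaderMap::new();
    headers.insert(
        CONTENT_TYPE,
        HeaderValue::from_static("multipart/form-data; boundary=abbc761f78ff4d7c"),
    );
    assert_eq!(Multipart::boundary(&headers).unwrap(), "abbc761f78ff4d7c");
}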


@ -1,11 +0,0 @@
# Session for actix web framework [![Build Status](https://travis-ci.org/actix/actix-web.svg?branch=master)](https://travis-ci.org/actix/actix-web) [![codecov](https://codecov.io/gh/actix/actix-web/branch/master/graph/badge.svg)](https://codecov.io/gh/actix/actix-web) [![crates.io](https://meritbadge.herokuapp.com/actix-session)](https://crates.io/crates/actix-session) [![Join the chat at https://gitter.im/actix/actix](https://badges.gitter.im/actix/actix.svg)](https://gitter.im/actix/actix?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
**This crate moved to https://github.com/actix/actix-extras.**
## Documentation & community resources
* [User Guide](https://actix.rs/docs/)
* [API Documentation](https://docs.rs/actix-session/)
* [Chat on gitter](https://gitter.im/actix/actix)
* Cargo package: [actix-session](https://crates.io/crates/actix-session)
* Minimum supported Rust version: 1.34 or later


@ -1,6 +1,14 @@
# Changes # Changes
## [Unreleased] - 2020-xx-xx ## Unreleased - 2020-xx-xx
* Upgrade `pin-project` to `1.0`.
## 3.0.0 - 2020-09-11
* No significant changes from `3.0.0-beta.2`.
## 3.0.0-beta.2 - 2020-09-10
* Update `actix-*` dependencies to latest versions.
## [3.0.0-beta.1] - 2020-xx-xx ## [3.0.0-beta.1] - 2020-xx-xx


@ -1,6 +1,6 @@
[package] [package]
name = "actix-web-actors" name = "actix-web-actors"
version = "3.0.0-beta.1" version = "3.0.0"
authors = ["Nikolay Kim <fafhrd91@gmail.com>"] authors = ["Nikolay Kim <fafhrd91@gmail.com>"]
description = "Actix actors support for actix web framework." description = "Actix actors support for actix web framework."
readme = "README.md" readme = "README.md"
@ -16,16 +16,16 @@ name = "actix_web_actors"
path = "src/lib.rs" path = "src/lib.rs"
[dependencies] [dependencies]
actix = "0.10.0-alpha.2" actix = "0.10.0"
actix-web = { version = "3.0.0-beta.1", default-features = false } actix-web = { version = "3.0.0", default-features = false }
actix-http = "2.0.0-beta.1" actix-http = "2.0.0"
actix-codec = "0.2.0" actix-codec = "0.3.0"
bytes = "0.5.2" bytes = "0.5.2"
futures-channel = { version = "0.3.5", default-features = false } futures-channel = { version = "0.3.5", default-features = false }
futures-core = { version = "0.3.5", default-features = false } futures-core = { version = "0.3.5", default-features = false }
pin-project = "0.4.17" pin-project = "1.0.0"
[dev-dependencies] [dev-dependencies]
actix-rt = "1.0.0" actix-rt = "1.1.1"
env_logger = "0.7" env_logger = "0.7"
futures-util = { version = "0.3.5", default-features = false } futures-util = { version = "0.3.5", default-features = false }


@ -1,5 +1,8 @@
#![allow(clippy::borrow_interior_mutable_const)]
//! Actix actors integration for Actix web framework //! Actix actors integration for Actix web framework
#![deny(rust_2018_idioms)]
#![allow(clippy::borrow_interior_mutable_const)]
mod context; mod context;
pub mod ws; pub mod ws;


@ -164,7 +164,6 @@ pub fn handshake_with_protocols(
let mut response = HttpResponse::build(StatusCode::SWITCHING_PROTOCOLS) let mut response = HttpResponse::build(StatusCode::SWITCHING_PROTOCOLS)
.upgrade("websocket") .upgrade("websocket")
.header(header::TRANSFER_ENCODING, "chunked")
.header(header::SEC_WEBSOCKET_ACCEPT, key.as_str()) .header(header::SEC_WEBSOCKET_ACCEPT, key.as_str())
.take(); .take();
@ -664,10 +663,10 @@ mod tests {
) )
.to_http_request(); .to_http_request();
assert_eq!( let resp = handshake(&req).unwrap().finish();
StatusCode::SWITCHING_PROTOCOLS, assert_eq!(StatusCode::SWITCHING_PROTOCOLS, resp.status());
handshake(&req).unwrap().finish().status() assert_eq!(None, resp.headers().get(&header::CONTENT_LENGTH));
); assert_eq!(None, resp.headers().get(&header::TRANSFER_ENCODING));
let req = TestRequest::default() let req = TestRequest::default()
.header( .header(


@ -3,53 +3,66 @@
## Unreleased - 2020-xx-xx ## Unreleased - 2020-xx-xx
## 0.4.0 - 2020-09-20
* Added compile success and failure testing. [#1677]
* Add `route` macro for supporting multiple HTTP methods guards. [#1674]
[#1677]: https://github.com/actix/actix-web/pull/1677
[#1674]: https://github.com/actix/actix-web/pull/1674
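A hedged sketch of the multi-method `route` macro referenced above (path and handler body are illustrative):

use actix_web::HttpResponse;
use actix_web_codegen::route;

// One handler registered for several method guards on the same path.
#[route("/", method = "GET", method = "HEAD")]
async fn index() -> HttpResponse {
    HttpResponse::Ok().finish()
}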
## 0.3.0 - 2020-09-11
* No significant changes from `0.3.0-beta.1`.
## 0.3.0-beta.1 - 2020-07-14 ## 0.3.0-beta.1 - 2020-07-14
* Add main entry-point macro that uses re-exported runtime. [#1559] * Add main entry-point macro that uses re-exported runtime. [#1559]
[#1559]: https://github.com/actix/actix-web/pull/1559 [#1559]: https://github.com/actix/actix-web/pull/1559
## [0.2.2] - 2020-05-23 ## 0.2.2 - 2020-05-23
* Add resource middleware on actix-web-codegen [#1467] * Add resource middleware on actix-web-codegen [#1467]
[#1467]: https://github.com/actix/actix-web/pull/1467 [#1467]: https://github.com/actix/actix-web/pull/1467
## [0.2.1] - 2020-02-25
## 0.2.1 - 2020-02-25
* Add `#[allow(missing_docs)]` attribute to generated structs [#1368] * Add `#[allow(missing_docs)]` attribute to generated structs [#1368]
* Allow the handler function to be named as `config` [#1290] * Allow the handler function to be named as `config` [#1290]
[#1368]: https://github.com/actix/actix-web/issues/1368 [#1368]: https://github.com/actix/actix-web/issues/1368
[#1290]: https://github.com/actix/actix-web/issues/1290 [#1290]: https://github.com/actix/actix-web/issues/1290
## [0.2.0] - 2019-12-13
## 0.2.0 - 2019-12-13
* Generate code for actix-web 2.0 * Generate code for actix-web 2.0
## [0.1.3] - 2019-10-14
## 0.1.3 - 2019-10-14
* Bump up `syn` & `quote` to 1.0 * Bump up `syn` & `quote` to 1.0
* Provide better error message * Provide better error message
## [0.1.2] - 2019-06-04
## 0.1.2 - 2019-06-04
* Add macros for head, options, trace, connect and patch http methods * Add macros for head, options, trace, connect and patch http methods
## [0.1.1] - 2019-06-01
## 0.1.1 - 2019-06-01
* Add syn "extra-traits" feature * Add syn "extra-traits" feature
## [0.1.0] - 2019-05-18
## 0.1.0 - 2019-05-18
* Release * Release
## [0.1.0-beta.1] - 2019-04-20
## 0.1.0-beta.1 - 2019-04-20
* Gen code for actix-web 1.0.0-beta.1 * Gen code for actix-web 1.0.0-beta.1
## [0.1.0-alpha.6] - 2019-04-14
## 0.1.0-alpha.6 - 2019-04-14
* Gen code for actix-web 1.0.0-alpha.6 * Gen code for actix-web 1.0.0-alpha.6
## [0.1.0-alpha.1] - 2019-03-28
## 0.1.0-alpha.1 - 2019-03-28
* Initial impl * Initial impl


@ -1,6 +1,6 @@
[package] [package]
name = "actix-web-codegen" name = "actix-web-codegen"
version = "0.3.0-beta.1" version = "0.4.0"
description = "Actix web proc macros" description = "Actix web proc macros"
readme = "README.md" readme = "README.md"
homepage = "https://actix.rs" homepage = "https://actix.rs"
@ -19,6 +19,8 @@ syn = { version = "1", features = ["full", "parsing"] }
proc-macro2 = "1" proc-macro2 = "1"
[dev-dependencies] [dev-dependencies]
actix-rt = "1.0.0" actix-rt = "1.1.1"
actix-web = "3.0.0-beta.1" actix-web = "3.0.0"
futures-util = { version = "0.3.5", default-features = false } futures-util = { version = "0.3.5", default-features = false }
trybuild = "1"
rustversion = "1"

Some files were not shown because too many files have changed in this diff Show More