Large JWTs could cause issues because the header or body size of the
HTTP request could get too large when you are a member of a lot of organizations.
This PR removes these specific keys since they are not used either
client side or server side.
Because Bitwarden does add these to their JWTs, I would suggest keeping
the code we had, but commented out, as a reference.
Removing it and having to search for it again when needed would be a waste of time.
Fixes #4156
This is a WIP for adding organization token login support.
It has basic token login and verification support, but that's about it.
This branch is a refresh of the previous version, and will contain code
from a PR based upon my previous branch.
All uses of `get_random()` were in the form of:
`&get_random(vec![0u8; SIZE])`
with `SIZE` being a constant.
Building a `Vec` is unnecessary for two reasons. First, it uses a
very short-lived dynamic memory allocation. Second, a `Vec` is a
resizable object, which is useless in this context, where the random
data has a fixed size and will only be read.
`get_random_bytes()` takes a constant as a generic parameter and
returns an array with the requested number of random bytes.
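A minimal sketch of what that looks like, assuming `ring`'s `SystemRandom` as the randomness source:

```rust
use ring::rand::{SecureRandom, SystemRandom};

// N is a const generic, so the bytes live in a stack-allocated array
// and no heap allocation is needed.
pub fn get_random_bytes<const N: usize>() -> [u8; N] {
    let mut array = [0; N];
    SystemRandom::new()
        .fill(&mut array)
        .expect("Error generating random values");
    array
}
```

A call site then becomes `get_random_bytes::<32>()` instead of `&get_random(vec![0u8; 32])`.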
Stack safety analysis: the random bytes will be allocated on the
caller stack for a very short time (until the encoding function has
been called on the data). In some cases, the random bytes take
less room than the `Vec` did (a `Vec` is 24 bytes on a 64-bit
machine). The maximum size used is 180 bytes, which amounts to
0.008% of the default stack size for a Rust thread (2 MiB),
so this is a non-issue.
Also, most of these random bytes are ultimately encoded
using an `Encoding`. The function `crypto::encode_random_bytes()`
generates random bytes and encodes them with the provided
`Encoding`, which deduplicates that code.
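A sketch of that helper, assuming the `data_encoding` crate's `Encoding` type:

```rust
use data_encoding::Encoding;

// Generates N random bytes and encodes them (e.g. with HEXLOWER or
// BASE64) in a single call, replacing the repeated
// encode-after-random pattern at the call sites.
pub fn encode_random_bytes<const N: usize>(e: Encoding) -> String {
    e.encode(&get_random_bytes::<N>())
}
```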
`generate_id()` has been converted to take a constant generic
parameter as well, since the length of the requested `String` is always
a constant.
A bit inspired by @paolobarbolini and this lettre PR: https://github.com/lettre/lettre/pull/784 .
I added a few more clippy lints here and fixed the resulting issues.
Overall I think this could help prevent future issues, and maybe
even performance problems. It also makes some code a bit clearer.
We could always add more if we want to; I left a few out which I think
aren't that big of an issue. Some, like `unused_async`, are nice,
and resulted in a few `async` removals.
Others are maybe more cosmetic, like `string_to_string`, but I
think it looks better to use `clone` in those cases instead of `to_string` when the value already is a `String`.
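For illustration, this is the kind of change `string_to_string` asks for (hypothetical snippet):

```rust
fn example(name: String) -> (String, String) {
    // Flagged by clippy::string_to_string: `to_string()` on a value
    // that is already a `String` hides that this is just a copy.
    let before = name.to_string();
    // Preferred: `clone()` makes the copy explicit.
    let after = name.clone();
    (before, after)
}
```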
Improved sync speed by resolving the N+1 query issues.
Solves #1402 and #1453.
With this change there is just one query to retrieve all the
important data, and the matching is done in-code/in-memory.
With a very large database, the sync time went down by about a factor of three.
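Roughly, the in-memory matching works like this (a minimal sketch, with a hypothetical `Attachment` model standing in for the real ones):

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the real model.
struct Attachment {
    cipher_uuid: String,
}

// All attachments are fetched in a single query; grouping them by cipher
// UUID lets each cipher find its attachments in memory instead of
// issuing one query per cipher (the N+1 pattern).
fn group_by_cipher(attachments: Vec<Attachment>) -> HashMap<String, Vec<Attachment>> {
    let mut map: HashMap<String, Vec<Attachment>> = HashMap::new();
    for a in attachments {
        map.entry(a.cipher_uuid.clone()).or_default().push(a);
    }
    map
}
```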
Also updated misc crates and GitHub Actions versions.
- Updated jsonwebtoken to latest version
- Trim `username` received from the login form ( Fixes #2348 )
- Make uuid and user_uuid a combined primary key for the devices table ( Fixes #2295; see the schema sketch after this list )
- Updated crates including regex which contains a CVE ( https://blog.rust-lang.org/2022/03/08/cve-2022-24713.html )
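In Diesel schema terms, the combined primary key for the devices table looks roughly like this (columns trimmed down to a sketch):

```rust
table! {
    // The tuple after the table name declares the composite primary key.
    devices (uuid, user_uuid) {
        uuid -> Text,
        user_uuid -> Text,
        name -> Text,
    }
}
```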
This is a rather large PR which updates the async branch to make all the
database methods `async fn`s.
Some iter/map logic needed to be changed to a `stream::iter().then()`, but
besides that most changes were just adding async/await where needed.
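For example, a synchronous `iter().map()` that serializes items can no longer call the now-async methods, so it becomes something like this (hypothetical names):

```rust
use futures::stream::{self, StreamExt};

// `map` can't await, so the iterator is wrapped in a stream and `then`
// drives the async call for each item in order.
let ciphers_json: Vec<serde_json::Value> = stream::iter(ciphers)
    .then(|c| async move { c.to_json(&conn).await })
    .collect()
    .await;
```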
An incomplete 2FA login is one where the correct master password was provided,
but the 2FA token or action required to complete the login was not provided
within the configured time limit. This potentially indicates that the user's
master password has been compromised, but the login was blocked by 2FA.
Be aware that the 2FA step can usually still be completed after the email
notification has already been sent out, which could be confusing. Therefore,
the incomplete 2FA time limit should be long enough that this situation would
be unlikely. This feature can also be disabled entirely if desired.
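As a sketch of the semantics (hedged; the names in the real config and code may differ):

```rust
use chrono::{Duration, NaiveDateTime, Utc};

// A login is considered incomplete once the configured number of
// minutes has passed without the 2FA step being finished; a limit of
// zero or less would disable the check entirely (assumed semantics).
fn is_incomplete_2fa(login_started: NaiveDateTime, limit_minutes: i64) -> bool {
    limit_minutes > 0 && Utc::now().naive_utc() > login_started + Duration::minutes(limit_minutes)
}
```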
Diesel requires the following changes:
- Separate connection and pool types per connection, the generate_connections! macro generates an enum with a variant per db type
- Separate migrations and schemas, these were always imported as one type depending on db feature, now they are all imported under different module names
- Separate model objects per connection, the db_object! macro generates one object for each connection with the diesel macros, a generic object, and methods to convert between the connection-specific and the generic ones
- Separate connection queries, the db_run! macro allows writing the query once and compiling it for all databases, or writing separate versions per backend (see the sketch after this list)
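For instance, a query written once and compiled for every enabled backend (hypothetical query):

```rust
pub fn count_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
    // db_run! expands this body once per enabled database backend,
    // using that backend's connection and schema types.
    db_run! { conn: {
        devices::table
            .filter(devices::user_uuid.eq(user_uuid))
            .count()
            .first::<i64>(conn)
            .unwrap_or(0)
    }}
}
```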
PostgreSQL updates/inserts ignored None/null values.
This is nice for new entries, but not for updates.
Added a derive option to always write these None/null values for `Option<>`
fields.
This solves issue #965
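With Diesel's derive syntax of that era, the option looks roughly like this (hypothetical model):

```rust
#[derive(AsChangeset)]
#[changeset_options(treat_none_as_null = "true")]
#[table_name = "users"]
pub struct UserDb {
    pub email: String,
    // With treat_none_as_null, a None here is written as NULL on
    // update instead of being silently skipped.
    pub totp_secret: Option<String>,
}
```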
This includes migrations as well as Dockerfiles for amd64.
The biggest change is that replace_into isn't supported by Diesel for the
PostgreSQL backend, instead requiring the use of on_conflict. This
unfortunately requires a branch for save() on all of the models currently
using replace_into.
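The PostgreSQL branch of save() then looks something like this (hypothetical table and columns):

```rust
// replace_into() is unavailable on PostgreSQL, so the upsert is spelled
// out with on_conflict: insert the row, or update it if the key exists.
diesel::insert_into(users::table)
    .values(&user_db)
    .on_conflict(users::uuid)
    .do_update()
    .set(&user_db)
    .execute(conn)?;
```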
Improved error logging, now it won't show a generic error message in some situations.
Removed the device deletion, which is not needed as the device will be overwritten later.
Logged more info when an error occurs saving a device.
Added orgmanager to JWT claims.
Known missing:
- import ciphers, create ciphers types other than login and card, update ciphers
- clear and put device_tokens
- Equivalent domains
- Organizations