As mentioned in #3111, using a very large vault causes some issues, mainly because of a SQLite limit, but it could also cause issues on MariaDB/MySQL or PostgreSQL. It also uses a lot of memory and triggers a lot of memory allocations.
This PR solves this by removing the need for all the cipher_uuids just
to gather the correct attachments.
It uses the user_uuid and org_uuids to get all attachments linked
to them, whether the user has access to them or not. This isn't an
issue, since the matching is done per cipher and the attachment data is
only returned if there is a matching cipher the user has access to.
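A rough sketch of that in-memory matching, with simplified, hypothetical `Attachment`/`Cipher` stand-ins for the real models: the attachments are fetched in one go, grouped by `cipher_uuid`, and only kept where an accessible cipher matches.

```rust
use std::collections::HashMap;

// Hypothetical, simplified stand-ins for the real models.
struct Attachment { cipher_uuid: String, file_name: String }
struct Cipher { uuid: String }

fn attachments_per_cipher(
    attachments: Vec<Attachment>,  // fetched in one query via user_uuid and org_uuids
    accessible_ciphers: &[Cipher], // ciphers the user is allowed to see
) -> HashMap<String, Vec<Attachment>> {
    // Group every fetched attachment by its cipher_uuid.
    let mut grouped: HashMap<String, Vec<Attachment>> = HashMap::with_capacity(attachments.len());
    for att in attachments {
        grouped.entry(att.cipher_uuid.clone()).or_default().push(att);
    }
    // Attachments whose cipher the user cannot access never match an
    // accessible cipher, so they are dropped here and never returned.
    grouped.retain(|cipher_uuid, _| accessible_ciphers.iter().any(|c| &c.uuid == cipher_uuid));
    grouped
}
```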
I also modified some code to use `::with_capacity(n)` where
possible. This prevents re-allocations when the `Vec` grows,
which will happen a lot if there are many ciphers.
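For example (a trivial sketch, not the actual code), pre-sizing the output `Vec` when the number of ciphers is already known avoids repeated grow-and-copy cycles:

```rust
// The final length is known up front, so the Vec never reallocates
// while the per-cipher values are pushed one by one.
fn to_json_list(cipher_names: &[String]) -> Vec<String> {
    let mut out = Vec::with_capacity(cipher_names.len());
    for name in cipher_names {
        out.push(format!("{{\"name\":\"{name}\"}}")); // placeholder for the real per-cipher JSON
    }
    out
}
```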
According to my tests measuring the time it takes to sync, this
lowered the duration a bit further.
Fixes #3111
Improved sync speed by resolving the N+1 query issues.
Solves #1402 and #1453
With this change there is just one query done to retrieve all the
important data, and matching is done in-code/in-memory.
With a very large database the sync time went down by about a factor of three.
Also updated misc crates and GitHub Actions versions.
This is a rather large PR which updates the async branch to have all the
database methods as async fns.
Some iter/map logic needed to be changed to a `stream::iter().then()`, but
besides that most changes were just adding async/await where needed.
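For instance (a minimal sketch using the `futures` crate, with a made-up async `load_name` helper standing in for an async database call), an `iter().map()` over a synchronous call becomes a stream so each element can be awaited:

```rust
use futures::stream::{self, StreamExt};

// Hypothetical async stand-in for a previously synchronous lookup.
async fn load_name(uuid: &str) -> String {
    format!("name-of-{uuid}")
}

async fn load_all_names(uuids: Vec<String>) -> Vec<String> {
    // Before: uuids.iter().map(|u| load_name_sync(u)).collect()
    // After: wrap the iterator in a stream so the async call can be awaited per item.
    stream::iter(uuids)
        .then(|u| async move { load_name(&u).await })
        .collect()
        .await
}
```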
When using MariaDB v10.5+, foreign-key errors were popping up because of
some changes in that version. To mitigate this on MariaDB and other
MySQL forks, those errors are now caught, and an update will happen
instead of a replace_into. I have tested this as thoroughly as possible with
MariaDB 10.5, 10.4, 10.3 and the default MySQL on Ubuntu Focal, and
tested it again using SQLite; all seems to be OK on all tables.
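Roughly, the fallback looks like the sketch below (hypothetical `devices` table and columns, Diesel 2.x style; the real models differ): try the `replace_into`, and if MariaDB rejects it with a foreign-key violation, do a plain update instead.

```rust
use diesel::prelude::*;
use diesel::result::{DatabaseErrorKind, Error};

diesel::table! {
    devices (uuid) {
        uuid -> Text,
        name -> Text,
    }
}

fn save_device(conn: &mut MysqlConnection, uuid_val: &str, name_val: &str) -> QueryResult<usize> {
    // REPLACE INTO deletes the old row before inserting the new one, which can
    // trip foreign-key checks on MariaDB 10.5+.
    match diesel::replace_into(devices::table)
        .values((devices::uuid.eq(uuid_val), devices::name.eq(name_val)))
        .execute(conn)
    {
        // Catch the foreign-key violation and update the existing row instead.
        Err(Error::DatabaseError(DatabaseErrorKind::ForeignKeyViolation, _)) => {
            diesel::update(devices::table.filter(devices::uuid.eq(uuid_val)))
                .set(devices::name.eq(name_val))
                .execute(conn)
        }
        other => other,
    }
}
```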
Resolves #1081, resolves #1065, resolves #1050
Diesel requires the following changes:
- Separate connection and pool types per connection; the `generate_connections!` macro generates an enum with a variant per db type
- Separate migrations and schemas; these were previously imported as one type depending on the db feature, but are now all imported under different module names
- Separate model objects per connection; the `db_object!` macro generates one object for each connection with the Diesel macros, a generic object, and methods to convert between the connection-specific and the generic ones
- Separate connection queries; the `db_run!` macro allows writing a query once and compiling it for all databases, or writing separate queries per backend (see the sketch below)
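A loose illustration of the resulting pattern (not the actual macro expansion; names and structure are assumptions, and it presumes all three backend features are enabled): one enum wraps a concrete Diesel connection per backend, and a query body is dispatched to whichever variant is active.

```rust
use diesel::prelude::*;

// Simplified version of what an enum-per-backend connection type can look like.
pub enum DbConn {
    Sqlite(SqliteConnection),
    Mysql(MysqlConnection),
    Postgresql(PgConnection),
}

impl DbConn {
    // The same query body runs against whichever backend this connection wraps.
    pub fn ping(&mut self) -> QueryResult<usize> {
        match self {
            DbConn::Sqlite(c) => diesel::sql_query("SELECT 1").execute(c),
            DbConn::Mysql(c) => diesel::sql_query("SELECT 1").execute(c),
            DbConn::Postgresql(c) => diesel::sql_query("SELECT 1").execute(c),
        }
    }
}
```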
PostgreSQL updates/inserts ignored None/null values.
This is nice for new entries, but not for updates.
Added a derive option to always include these None/null values for `Option<>`
fields.
This solves issue #965
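This presumably maps to Diesel's `treat_none_as_null` changeset option; a minimal sketch (hypothetical `users` table and field, Diesel 2.x attribute syntax) of how an update can now explicitly write NULL:

```rust
use diesel::prelude::*;

diesel::table! {
    users (uuid) {
        uuid -> Text,
        totp_secret -> Nullable<Text>,
    }
}

// With treat_none_as_null, a None field is written as NULL on update
// instead of being silently dropped from the SET clause.
#[derive(AsChangeset)]
#[diesel(table_name = users, treat_none_as_null = true)]
struct UserUpdate {
    totp_secret: Option<String>,
}

fn clear_totp(conn: &mut PgConnection, user_uuid: &str) -> QueryResult<usize> {
    diesel::update(users::table.filter(users::uuid.eq(user_uuid)))
        .set(&UserUpdate { totp_secret: None }) // sets totp_secret = NULL
        .execute(conn)
}
```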
This includes migrations as well as Dockerfiles for amd64.
The biggest change is that `replace_into` isn't supported by Diesel for the
PostgreSQL backend, instead requiring the use of `on_conflict`. This
unfortunately requires a branch for `save()` on all of the models currently
using `replace_into`.
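A hedged sketch of that branch (hypothetical `folders` table; the real models differ): on the PostgreSQL backend the upsert is expressed with `on_conflict` instead of `replace_into`.

```rust
use diesel::prelude::*;

diesel::table! {
    folders (uuid) {
        uuid -> Text,
        name -> Text,
    }
}

fn save_folder(conn: &mut PgConnection, uuid_val: &str, name_val: &str) -> QueryResult<usize> {
    // INSERT ... ON CONFLICT (uuid) DO UPDATE SET name = ...
    diesel::insert_into(folders::table)
        .values((folders::uuid.eq(uuid_val), folders::name.eq(name_val)))
        .on_conflict(folders::uuid)
        .do_update()
        .set(folders::name.eq(name_val))
        .execute(conn)
}
```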
Known missing:
- import ciphers, create cipher types other than login and card, update ciphers
- clear and put device_tokens
- Equivalent domains
- Organizations