minio/pkg/dsync
Harshavardhana 4550ac6fff
fix: refactor locks to apply them uniquely per node (#11052)
This refactor is done for a few reasons, listed below:

- to avoid deadlocks in scenarios where the
  number of nodes is smaller than the actual
  erasure stripe count, in which the N
  participating local lockers can lead to
  deadlocks across systems.

- avoids expiry routines running 1000s of
  separate network operations and routes per
  disk, when each of them is still accessing
  one single local entity.

- it is ideal to have a single globalLockServer
  per instance.

- In a 32-node deployment, each server group is
  still concentrated towards the same set of
  lockers that participate during the write/read
  phase, unlike the previous minio/dsync
  implementation. This avoids sending 32
  requests; instead we send at most as many
  requests as there are unique nodes
  participating in a write/read phase (see the
  sketch after this commit message).

- reduces overall chattiness on smaller setups.
2020-12-10 07:28:37 -08:00
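As a rough illustration of the per-node dedup this commit describes, here is a minimal Go sketch (not MinIO's actual code; `endpoint`, `lockersPerNode`, and the host values are hypothetical) that collapses per-disk endpoints down to one locker per unique host:

```go
package main

import "fmt"

// endpoint is a hypothetical type holding a disk's host and path.
type endpoint struct {
	Host string // node address, e.g. "server1:9000"
	Path string // disk mount path on that node
}

// lockersPerNode returns one lock target per unique host instead of
// one per disk. With 4 disks spread over 2 nodes this yields 2
// lockers, not 4, capping lock RPC fan-out at the node count.
func lockersPerNode(endpoints []endpoint) []string {
	seen := make(map[string]struct{}, len(endpoints))
	var nodes []string
	for _, ep := range endpoints {
		if _, ok := seen[ep.Host]; ok {
			continue // another disk on a host we already lock through
		}
		seen[ep.Host] = struct{}{}
		nodes = append(nodes, ep.Host)
	}
	return nodes
}

func main() {
	eps := []endpoint{
		{"server1:9000", "/disk1"}, {"server1:9000", "/disk2"},
		{"server2:9000", "/disk1"}, {"server2:9000", "/disk2"},
	}
	fmt.Println(lockersPerNode(eps)) // [server1:9000 server2:9000]
}
```

Under this scheme, expiry routines also touch each node once rather than once per disk, which is the reduction in chattiness the commit message refers to.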
.gitignore Support MinIO to be deployed on more than 32 nodes (#8492) 2019-11-13 12:17:45 -08:00
drwmutex.go fix: refactor locks to apply them uniquely per node (#11052) 2020-12-10 07:28:37 -08:00
drwmutex_test.go allow lock tolerance to match storage-class drive tolerance (#10270) 2020-08-14 18:17:14 -07:00
dsync-server_test.go fix: Speed up multi-object delete by taking bulk locks (#8974) 2020-02-21 11:29:57 +05:30
dsync.go fix: add lock ownership to expire locks (#10571) 2020-09-25 19:21:52 -07:00
dsync_test.go fix: add lock ownership to expire locks (#10571) 2020-09-25 19:21:52 -07:00
rpc-client-impl_test.go fix: handle concurrent lockers with multiple optimizations (#10640) 2020-10-08 12:32:32 -07:00
rpc-client-interface.go expire lockers if lockers are offline (#10749) 2020-10-24 13:23:16 -07:00