Compare commits

...

74 commits

Author SHA1 Message Date
Harshavardhana 68c5ad83fb
fix: backend not reachable should be more descriptive (#13634) 2021-11-10 22:33:17 -08:00
Harshavardhana 5acc8c0134
add multi-site replication tests (#13631) 2021-11-10 18:18:09 -08:00
Klaus Post c897b6a82d
fix: missing entries on first list resume (#13627)
On first list resume, or when specifying custom markers, entries could be missed in rare cases.

Do conservative truncation of entries when forwarding.

Replaces #13619
2021-11-10 10:41:21 -08:00
Shireesh Anjal d008e90d50
Support dynamic reset of minio config (#13626)
If a given MinIO config is dynamic (can be changed without restart),
ensure that it can also be reset without a restart.

Signed-off-by: Shireesh Anjal <shireesh@minio.io>
2021-11-10 10:01:32 -08:00
Harshavardhana ea820b30bf
fix: use equalFold() instead of lower and compare (#13624) 2021-11-10 08:12:50 -08:00
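A minimal sketch of the pattern this commit adopts (call sites are illustrative, not MinIO's): `strings.EqualFold` compares case-insensitively in place, avoiding the two lowercased copies that a `strings.ToLower` comparison allocates.

```
package main

import (
	"fmt"
	"strings"
)

func main() {
	a, b := "X-Amz-Date", "x-amz-date"

	// Allocates two lowercased copies before comparing.
	slow := strings.ToLower(a) == strings.ToLower(b)

	// Compares in place, case-insensitively, with no allocations.
	fast := strings.EqualFold(a, b)

	fmt.Println(slow, fast) // true true
}
```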
Poorna K 03725dc015
Default multipart caching to writethrough (#13613)
when `MINIO_CACHE_COMMIT` is set.

- `writeback` caching applies only to single
uploads. When the cache commit mode is
`writeback`, multipart caching defaults to
being synchronous.

- Add writethrough caching for single uploads
2021-11-10 08:12:03 -08:00
Harshavardhana 0a6f9bc1eb
allocate new highwayhash for each string hash (#13623)
fixes #13622
2021-11-09 15:28:08 -08:00
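A hedged sketch of the underlying issue, assuming the `github.com/minio/highwayhash` API (`New` takes a 32-byte key and returns a `hash.Hash`): a `hash.Hash` is stateful and not safe for concurrent use, so allocating a fresh hasher per string avoids sharing one across goroutines.

```
// hashString is a hypothetical helper showing the fix: allocate a
// new hasher per call instead of re-using one stateful hash.Hash.
func hashString(key []byte, s string) ([]byte, error) {
	h, err := highwayhash.New(key) // key must be exactly 32 bytes
	if err != nil {
		return nil, err
	}
	h.Write([]byte(s))
	return h.Sum(nil), nil
}
```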
Aditya Manthramurthy 1946922de3
Add CI for etcd IAM backend (#13614)
Runs when ETCD_SERVER env var is set
2021-11-09 09:25:13 -08:00
Minio Trusted edf1f4233b Update yaml files to latest version RELEASE.2021-11-09T03-21-45Z 2021-11-09 04:51:05 +00:00
Harshavardhana f4b55ea7a7 update console to v0.12.2 2021-11-08 19:21:45 -08:00
Aditya Manthramurthy 8dfd1f03e9
fix: IAM initialization crash with etcd store (#13612) 2021-11-08 12:55:27 -08:00
Harshavardhana acf26c5ab7 re-arrange metacache struct to be optimal (#13609) 2021-11-08 10:26:08 -08:00
Klaus Post d9800c8135
fix: make sure to log panic in handlers (#13611) 2021-11-08 09:28:13 -08:00
Harshavardhana 02bef7560f add missing Copyright header 2021-11-08 09:13:15 -08:00
Daniel A. Ochoa 07dd0692b6
Fix hdfs gateway concurrent map writes (#13596)
Co-authored-by: Harshavardhana <harsha@minio.io>
2021-11-08 09:07:58 -08:00
Klaus Post 4f3317effe
Close stream on panic (#13605)
Always close streamHTTPResponse on panic on the main thread to avoid a
write/flush after the response handler has returned.
2021-11-08 08:41:27 -08:00
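A minimal sketch of the pattern, with hypothetical names: a deferred recover closes the stream before re-raising, so a panicking handler cannot leave it open for a later write or flush.

```
import "io"

// closeOnPanic illustrates the idea; it is not MinIO's actual code.
func closeOnPanic(w io.WriteCloser, handler func(io.Writer) error) (err error) {
	defer func() {
		if r := recover(); r != nil {
			w.Close() // release the stream before re-raising
			panic(r)
		}
	}()
	err = handler(w)
	w.Close()
	return err
}
```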
Klaus Post 9afdbe3648
fix: RLock UID memory leak (#13607)
UIDs were misnamed in RLock, leading to memory buildup.

Regression in #13430
2021-11-08 07:35:50 -08:00
Aditya Manthramurthy fe0df01448
fix: locking in some situations for IAM store (#13595)
- Fix a bug where read locks were taken instead of write locks in some situations
- Remove an unnecessary lock when updating based on notifications.
2021-11-07 17:42:32 -08:00
Harshavardhana 12e6907512
apply spelling checks for US locale (#13599) 2021-11-07 01:22:59 -08:00
Harshavardhana 5aef492b4c update disk-caching design guide 2021-11-07 01:21:34 -08:00
Harshavardhana 5d7ed8ff7d update S3 gateway limitation docs 2021-11-06 23:24:48 -07:00
Harshavardhana b1754fc5ff update go.mod and CREDITS 2021-11-06 11:39:17 -07:00
Harshavardhana 19bbf3e142 update CREDITS 2021-11-05 13:53:21 -07:00
jiangfucheng e1755275a0
resume heal from previous object instead of bucket after server restart (#13581) 2021-11-05 13:10:41 -07:00
Harshavardhana 520037e721
move to jwt-go v4 with correct releases (#13586) 2021-11-05 12:20:08 -07:00
Minio Trusted cbb0828ab8 Update yaml files to latest version RELEASE.2021-11-05T09-16-26Z 2021-11-05 10:03:56 +00:00
Andreas Auernhammer 8774d10bdf
sts: always verify the key usage of client certificates (#13583)
This commit makes the MinIO server behavior more consistent
w.r.t. key usage verification.

When MinIO verifies the client certificates, it also checks
that the client certificate is valid for client authentication
(or any, i.e. wildcard, usage).

However, the MinIO server used to not verify the client key usage
when client certificate verification was disabled.
Now, the MinIO server verifies the client key usage even when
client certificate verification has been disabled. This makes
the MinIO behavior more consistent from a client's perspective.

Now, a client certificate has to be valid for client authentication
in all cases.

Signed-off-by: Andreas Auernhammer <hi@aead.dev>
2021-11-05 02:16:26 -07:00
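A sketch of the kind of check described, using the standard `crypto/x509` fields (not MinIO's exact code):

```
import "crypto/x509"

// validForClientAuth reports whether a certificate may be used for
// TLS client authentication. Per RFC 5280, an absent extended key
// usage imposes no restriction; otherwise ClientAuth or the Any
// (wildcard) usage must be present.
func validForClientAuth(cert *x509.Certificate) bool {
	if len(cert.ExtKeyUsage) == 0 {
		return true
	}
	for _, eku := range cert.ExtKeyUsage {
		if eku == x509.ExtKeyUsageClientAuth || eku == x509.ExtKeyUsageAny {
			return true
		}
	}
	return false
}
```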
Harshavardhana df9f479d58 update console to v0.12.1 2021-11-04 16:43:59 -07:00
Harshavardhana 8bb52c9c2a
fix: ignore disks that are available but not writable (#13585)
This allows replacing drives when some of them,
while available, are not writable.
2021-11-04 16:42:49 -07:00
Aditya Manthramurthy 947c423824
fix: user DN filtering that causes some unnecessary logs (#13584)
Additionally, remove the unnecessary `isUsingLookupBind` field in the LDAP struct
2021-11-04 13:11:20 -07:00
Harshavardhana c3d24fb26d
use single encoder for sending speedtest results (#13579)
Bonus: if a run shows PUT throughput higher
than GET, capture it anyway to display the
unexpected result, which provides a way
to understand what might be slowing things
down on the system.

For example, on a Data24 WDC setup it is clearly
visible that there is a bug in the hardware.

```
./mc admin speedtest wdc/
⠧ Running speedtest (With 64 MiB object size, 32 concurrency) PUT: 31 GiB/s GET: 24 GiB/s
⠹ Running speedtest (With 64 MiB object size, 48 concurrency) PUT: 38 GiB/s GET: 24 GiB/s

MinIO 2021-11-04T06:08:33Z, 6 servers, 48 drives
PUT: 38 GiB/s, 605 objs/s
GET: 24 GiB/s, 383 objs/s
```

Reads are almost 14 GiB/s slower than writes, which
is practically impossible.
2021-11-04 12:11:52 -07:00
Pavel M 112f9ae087
claim exp should be integer (#13582)
The claim `exp` can be:

- float64
- json.Number

as per the OIDC spec https://openid.net/specs/openid-connect-core-1_0.html#IDToken

Avoid using strings, since the upstream library now supports only these two types.
2021-11-04 12:03:43 -07:00
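A small sketch of handling both decoded shapes (helper name is hypothetical):

```
import (
	"encoding/json"
	"fmt"
)

// expiryOf extracts a Unix expiry from a decoded `exp` claim, which
// may be a float64 (default JSON decoding) or a json.Number (when
// the decoder was configured with UseNumber).
func expiryOf(claim interface{}) (int64, error) {
	switch v := claim.(type) {
	case float64:
		return int64(v), nil
	case json.Number:
		return v.Int64()
	default:
		return 0, fmt.Errorf("unsupported exp claim type %T", claim)
	}
}
```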
Aditya Manthramurthy 01b9ff54d9
Add LDAP STS tests and workflow for CI (#13576)
Runs LDAP tests with openldap container on GH Actions
2021-11-04 08:16:30 -07:00
Aditya Manthramurthy 64a1904136
Remove unused GlobalServiceDoneCh (#13578) 2021-11-04 08:15:10 -07:00
Aditya Manthramurthy bce6864785
Add tests to verify default server policies (#13575)
Check that they are present and that they can be modified by the user
2021-11-03 19:49:05 -07:00
Aditya Manthramurthy ecd54b4cba
Move all IAM storage functionality into iam store type (#13567)
This reverts commit 091a7ae359.

- Ensure all actions accessing storage lock properly.

- Behavior change: policies can be deleted only when they
  are not associated with any active credentials.

Also adds a fix for the accidental canned policy removal that was present in the
reverted version of the change.
2021-11-03 19:47:49 -07:00
Harshavardhana ca2b288a4b update console to v0.12.0 2021-11-03 16:23:45 -07:00
Harshavardhana 1016fbb8f9
feat: detect starting from windows explorer (#13570)
Windows users often click on the binary without
knowing MinIO is a command-line tool that should be
run from a terminal. Print a message to guide them
on what to do.

Co-authored-by: Klaus Post <klauspost@gmail.com>
2021-11-03 14:22:13 -07:00
Harshavardhana be3f81c7ec
remove unused activeIOCount in single drive mode (#13574) 2021-11-03 12:29:45 -07:00
Minio Trusted 9f3c151c3c Update yaml files to latest version RELEASE.2021-11-03T03-36-36Z 2021-11-03 06:48:34 +00:00
Harshavardhana 9735f3d8f2 fix missing go.sum changes 2021-11-02 20:36:36 -07:00
Harshavardhana 34680c5ccf
fix: SQL select to honor limits properly for array queries (#13568)
added tests to cover the scenarios as well.
2021-11-02 19:14:46 -07:00
Krishna Srinivas 58934e5881
Support live updates for clients during speedtest (#13566) 2021-11-02 15:27:03 -07:00
Harshavardhana ad3f98b8e7 add util-linux RPM for setpriv command 2021-11-02 14:25:01 -07:00
Poorna K 7c33a33ef3
cache: fix commit value lookup in config (#13551) 2021-11-02 14:20:52 -07:00
Poorna K 3dfcca68e6
fix race in TestComputeActions test (#13564) 2021-11-02 14:20:15 -07:00
Harshavardhana 73b74c94a1
remove unnecessary RPMs to reduce security reports (#13565) 2021-11-02 14:15:46 -07:00
Harshavardhana 18338d60d5 treat all 2xx, 3xx as good status-codes
fixes #13560
2021-11-02 14:12:43 -07:00
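The predicate amounts to a range check; a sketch with a hypothetical helper name:

```
import "net/http"

// isGoodStatus treats every 2xx and 3xx response as success instead
// of accepting only http.StatusOK.
func isGoodStatus(code int) bool {
	return code >= http.StatusOK && code < http.StatusBadRequest // 200-399
}
```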
Harshavardhana e106070640 update docs to mention the expected behavior for requests_max
fixes #13561
2021-11-02 14:10:21 -07:00
Harshavardhana 091a7ae359 Revert "Move all IAM storage functionality into iam store type (#13541)"
This reverts commit caadcc3ed8.
2021-11-02 13:51:42 -07:00
Krishna Srinivas 70160aeab3
Remove IOPS autotuning and simplify autotune code (#13554) 2021-11-02 13:03:00 -07:00
jandres - moscardo 1aa08f594d
Update README.md prometheus (#13514)
Modify the doc to warn users about Prometheus sending `domain:port`
2021-11-02 12:27:30 -07:00
Harshavardhana 14d8a931fe
re-use io.Copy buffers with 32k pools (#13553)
Borrowing an idea from Go's use of this
optimization for ReadFrom() on the client
side, we should re-use the 32k buffers
io.Copy() allocates for a generic copy
from a reader to a writer.

The performance increase for reads of
really tiny objects is in this range
after this change:

> * Fastest: +7.89% (+1.3 MiB/s) throughput, +7.89% (+1308.1) obj/s
2021-11-02 08:11:50 -07:00
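A minimal sketch of the technique, assuming nothing about MinIO's internals: keep 32k buffers in a `sync.Pool` and hand them to `io.CopyBuffer`, which uses the supplied buffer instead of allocating its own.

```
import (
	"io"
	"sync"
)

// copyBufPool re-uses the 32k buffers io.Copy would otherwise
// allocate for every generic reader-to-writer copy.
var copyBufPool = sync.Pool{
	New: func() interface{} {
		b := make([]byte, 32*1024)
		return &b
	},
}

// copyWithPool copies src to dst using a pooled buffer.
func copyWithPool(dst io.Writer, src io.Reader) (int64, error) {
	bufp := copyBufPool.Get().(*[]byte)
	defer copyBufPool.Put(bufp)
	return io.CopyBuffer(dst, src, *bufp)
}
```

Note that `io.CopyBuffer` bypasses the supplied buffer when src implements `io.WriterTo` or dst implements `io.ReaderFrom`, so the pool only pays off on the generic path.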
Harshavardhana 30ba85bc67
no need to write storageClass globally (#13555)
fixes #13548
2021-11-02 08:11:20 -07:00
Aditya Manthramurthy caadcc3ed8
Move all IAM storage functionality into iam store type (#13541)
- Ensure all actions accessing storage lock properly.

- Behavior change: policies can be deleted only when they 
  are not associated with any active credentials.
2021-11-01 21:58:07 -07:00
Poorna K 26f55472c6
fix: clean up dangling buckets during bucket delete (#13523) 2021-11-01 21:52:45 -07:00
Aditya Manthramurthy 79a58e275c
fix: race in delete user functionality (#13547)
- The race happens with a goroutine that refreshes IAM cache data from storage.
- It could lead to deleted users re-appearing as valid live credentials.
- This change also causes CI to run tests without the race flag (in addition to
running with it).
2021-11-01 15:03:07 -07:00
Aditya Manthramurthy 900e584514
CI: Cancel in-progress jobs when a PR is updated (#13552)
- This should lead to faster results as jobs will be queued for shorter periods
when PRs are updated.

- Current behavior is that previously running CI jobs for an updated PR run to
completion needlessly, and cause new CI jobs to be queued.

Ref: https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions#concurrency
2021-11-01 13:42:48 -07:00
Harshavardhana bb639d9f29
remove double reads delete versions (#13544)
When deleting a collection of versions belonging
to the same object, we can avoid re-reading
xl.meta from the disk and instead purge
all the requested versions in memory.

The tradeoff is allocating a map to de-dup
the versions, allowing each disk to be read
only once per object.

Additionally, reduce the data transfer between
nodes by shortening msgp data values.
2021-11-01 10:50:07 -07:00
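A generic sketch of the de-dup idea (the type is illustrative, not MinIO's): group the requested versions by object up front so each object's metadata is read once.

```
// toDelete pairs an object with one version to remove.
type toDelete struct {
	Object    string
	VersionID string
}

// groupByObject de-dups the requested versions per object so each
// object's xl.meta needs to be read from disk only once.
func groupByObject(versions []toDelete) map[string][]string {
	m := make(map[string][]string, len(versions))
	for _, v := range versions {
		m[v.Object] = append(m[v.Object], v.VersionID)
	}
	return m
}
```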
Poorna K 15dcacc1fc
Add support for caching multipart in writethrough mode (#13507) 2021-11-01 08:11:58 -07:00
Harshavardhana 6d53e3c2d7
reduce number of middleware handlers (#13546)
- combine similar-looking functionalities into single
  handlers, and remove unnecessary proxying of
  requests at the handler layer.

- remove the bucket forwarding handler from the default setup;
  add it only if bucket federation is enabled.

Improvements observed for 1kiB object reads.
```
-------------------
Operation: GET
Operations: 4538555 -> 4595804
* Average: +1.26% (+0.2 MiB/s) throughput, +1.26% (+190.2) obj/s
* Fastest: +4.67% (+0.7 MiB/s) throughput, +4.67% (+739.8) obj/s
* 50% Median: +1.15% (+0.2 MiB/s) throughput, +1.15% (+173.9) obj/s
```
2021-11-01 08:04:03 -07:00
Klaus Post 8ed7346273
Disable AVX512 on Darwin (#13550)
Preemptively disable AVX512 until https://github.com/golang/go/issues/49233 has been resolved.

This potentially affects reedsolomon, simdjson, sha256-simd, md5-simd packages.

Init order requires a separate package since main itself is initialized last, but imports are initialized in the order they are imported from main (confirmed).
2021-11-01 08:03:16 -07:00
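A sketch of the init-order trick with hypothetical import paths, relying on the order the commit says it confirmed: main lists the tweak package first, so its init() runs before the init() of the SIMD-using packages below it.

```
package main

import (
	// Listed first so its init() disables AVX512 before any
	// package below probes CPU features (paths are hypothetical).
	_ "example.com/minio/disableavx512"

	_ "example.com/minio/simdconsumers"
)

func main() {}
```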
Harshavardhana 3c1220adca add tests for default governance replication 2021-10-30 08:57:59 -07:00
Harshavardhana 4ed0eb7012
remove double reads updating object metadata (#13542)
Removes RLock/RUnlock for updating metadata.
Since we already take a write lock to update
metadata, this change removes the read of xl.meta
as well as the additional lock; the performance
gain should theoretically be about 3x for

- PutObjectRetention
- PutObjectLegalHold

This optimization is mainly for Veeam-like
workloads that require a certain level of IOPS
from these API calls; we were losing IOPS.
2021-10-30 08:22:04 -07:00
Harshavardhana 2af5445309 update 3-site replication tests 2021-10-29 22:09:55 -07:00
Harshavardhana abb1916bda
update list objects limit to match S3 spec 2021-10-28 18:21:51 -07:00
Klaus Post 9424dca9e4
jwt: Improve allocations (#13532)
Avoid string -> byte allocations.

```
BenchmarkParseJWTStandardClaims-32       3527152           343.2 ns/op      1489 B/op         21 allocs/op
BenchmarkParseJWTStandardClaims-32       4713199           259.2 ns/op       706 B/op         16 allocs/op

BenchmarkParseJWTMapClaims-32        2666668           448.7 ns/op      1883 B/op         32 allocs/op
BenchmarkParseJWTMapClaims-32        3120709           377.1 ns/op      1227 B/op         28 allocs/op
```
2021-10-28 17:04:48 -07:00
Harshavardhana db84bb9bd3
avoid atomics for self contained reader/writers (#13531)
read/writers are not concurrent in handlers
and are self-contained - no need to use atomics
on them.

This avoids unnecessary contention where it's
not required.
2021-10-28 17:03:00 -07:00
Klaus Post c603f85488
readAllData: Reuse small file buffers (#13530)
(Re)use small buffers for small readAllData operations.
2021-10-28 17:02:22 -07:00
Aditya Manthramurthy 2f1ee25f50
Add test for AssumeRole with internal IDP (#13527) 2021-10-28 09:05:51 -07:00
Klaus Post 7bdf9005e5
Remove HTTP flushes for returning handlers (#13528)
When handlers return they are automatically flushed. Manual flushing can force ResponseWriters onto suboptimal paths and generally just wastes CPU.
2021-10-28 07:36:34 -07:00
Klaus Post d9c1d79e30
Protect logger targets (#13529)
Logger targets were not race-protected against concurrent updates, for example from `HTTPConsoleLoggerSys`.

Restrict direct access to targets and make slices immutable so a returned slice can be processed safely without locks.
2021-10-28 07:35:28 -07:00
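A sketch of the locking pattern described, under assumed names: guard the slice with an RWMutex and have the accessor return a copy, so callers can range over the snapshot without holding the lock.

```
import "sync"

// Target is an assumed minimal interface matching how targets are
// used elsewhere in this diff.
type Target interface {
	Endpoint() string
	String() string
}

var (
	targetsMu sync.RWMutex
	targets   []Target // guarded by targetsMu
)

// Targets returns an immutable snapshot of the registered targets.
func Targets() []Target {
	targetsMu.RLock()
	defer targetsMu.RUnlock()
	out := make([]Target, len(targets))
	copy(out, targets)
	return out
}

// AddTarget registers a target under the write lock.
func AddTarget(t Target) {
	targetsMu.Lock()
	defer targetsMu.Unlock()
	targets = append(targets, t)
}
```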
Harshavardhana bd88b86919 update console to latest to fix CVE-2021-42836 2021-10-27 21:14:02 -07:00
Minio Trusted 8e29ae8c44 Update yaml files to latest version RELEASE.2021-10-27T16-29-42Z 2021-10-28 02:45:22 +00:00
133 changed files with 6899 additions and 3996 deletions

View file

@ -1,13 +1,19 @@
name: Go
name: Crosscompile
on:
pull_request:
branches:
- master
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
jobs:
build:
name: MinIO crosscompile tests on ${{ matrix.go-version }} and ${{ matrix.os }}
name: Build Tests with Go ${{ matrix.go-version }} on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:

View file

@ -1,13 +1,19 @@
name: Go
name: Linters and Tests
on:
pull_request:
branches:
- master
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
jobs:
build:
name: MinIO tests on ${{ matrix.go-version }} and ${{ matrix.os }}
name: Go ${{ matrix.go-version }} on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
@ -39,4 +45,5 @@ jobs:
curl -L -o nancy https://github.com/sonatype-nexus-community/nancy/releases/download/${nancy_version}/nancy-${nancy_version}-linux-amd64 && chmod +x nancy
go list -deps -json ./... | jq -s 'unique_by(.Module.Path)|.[]|select(has("Module"))|.Module' | ./nancy sleuth
make
make test
make test-race

View file

@ -1,13 +1,19 @@
name: Go
name: Functional Tests
on:
pull_request:
branches:
- master
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
jobs:
build:
name: MinIO Setup on ${{ matrix.go-version }} and ${{ matrix.os }}
name: Go ${{ matrix.go-version }} on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:

.github/workflows/iam-integrations.yaml
View file

@ -0,0 +1,125 @@
name: IAM integration with external systems
on:
pull_request:
branches:
- master
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
jobs:
ldap-test:
name: LDAP Tests with Go ${{ matrix.go-version }}
runs-on: ubuntu-latest
services:
openldap:
image: quay.io/minio/openldap
ports:
- "389:389"
- "636:636"
env:
LDAP_ORGANIZATION: "MinIO Inc"
LDAP_DOMAIN: "min.io"
LDAP_ADMIN_PASSWORD: "admin"
strategy:
matrix:
go-version: [1.16.x, 1.17.x]
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
with:
go-version: ${{ matrix.go-version }}
- name: Test LDAP
env:
LDAP_TEST_SERVER: "localhost:389"
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make test-iam
etcd-test:
name: Etcd Backend Tests with Go ${{ matrix.go-version }}
runs-on: ubuntu-latest
services:
etcd:
image: "quay.io/coreos/etcd:v3.5.1"
env:
ETCD_LISTEN_CLIENT_URLS: "http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS: "http://0.0.0.0:2379"
ports:
- "2379:2379"
options: >-
--health-cmd "etcdctl endpoint health"
--health-interval 10s
--health-timeout 5s
--health-retries 5
strategy:
matrix:
go-version: [1.16.x, 1.17.x]
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
with:
go-version: ${{ matrix.go-version }}
- name: Test Etcd IAM backend
env:
ETCD_SERVER: "http://localhost:2379"
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make test-iam
iam-etcd-test:
name: Etcd Backend + LDAP Tests with Go ${{ matrix.go-version }}
runs-on: ubuntu-latest
services:
openldap:
image: quay.io/minio/openldap
ports:
- "389:389"
- "636:636"
env:
LDAP_ORGANIZATION: "MinIO Inc"
LDAP_DOMAIN: "min.io"
LDAP_ADMIN_PASSWORD: "admin"
etcd:
image: "quay.io/coreos/etcd:v3.5.1"
env:
ETCD_LISTEN_CLIENT_URLS: "http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS: "http://0.0.0.0:2379"
ports:
- "2379:2379"
options: >-
--health-cmd "etcdctl endpoint health"
--health-interval 10s
--health-timeout 5s
--health-retries 5
strategy:
matrix:
go-version: [1.16.x, 1.17.x]
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
with:
go-version: ${{ matrix.go-version }}
- name: Test Etcd IAM backend with LDAP IDP
env:
ETCD_SERVER: "http://localhost:2379"
LDAP_TEST_SERVER: "localhost:389"
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
make test-iam

.github/workflows/replication.yaml
View file

@ -0,0 +1,33 @@
name: Multi-site replication tests
on:
pull_request:
branches:
- master
# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
jobs:
replication-test:
name: Replication Tests with Go ${{ matrix.go-version }}
runs-on: ubuntu-latest
strategy:
matrix:
go-version: [1.16.x, 1.17.x]
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
with:
go-version: ${{ matrix.go-version }}
- name: Test Replication
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
go install -v github.com/minio/mc@latest
make test-replication

CREDITS

File diff suppressed because it is too large

View file

@ -27,8 +27,9 @@ COPY CREDITS /licenses/CREDITS
COPY LICENSE /licenses/LICENSE
RUN \
microdnf clean all && \
microdnf update --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux iproute iputils --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux --nodocs && \
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && \
microdnf install minisign --nodocs && \
mkdir -p /opt/bin && chmod -R 777 /opt/bin && \
@ -39,7 +40,8 @@ RUN \
chmod +x /opt/bin/minio && \
chmod +x /usr/bin/docker-entrypoint.sh && \
chmod +x /usr/bin/verify-minio.sh && \
/usr/bin/verify-minio.sh
/usr/bin/verify-minio.sh && \
microdnf clean all
EXPOSE 9000

View file

@ -27,8 +27,9 @@ COPY CREDITS /licenses/CREDITS
COPY LICENSE /licenses/LICENSE
RUN \
microdnf clean all && \
microdnf update --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux iproute iputils --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux --nodocs && \
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && \
microdnf install minisign --nodocs && \
mkdir -p /opt/bin && chmod -R 777 /opt/bin && \
@ -39,7 +40,8 @@ RUN \
chmod +x /opt/bin/minio && \
chmod +x /usr/bin/docker-entrypoint.sh && \
chmod +x /usr/bin/verify-minio.sh && \
/usr/bin/verify-minio.sh
/usr/bin/verify-minio.sh && \
microdnf clean all
EXPOSE 9000

View file

@ -20,8 +20,8 @@ help: ## print this help
getdeps: ## fetch necessary dependencies
@mkdir -p ${GOPATH}/bin
@echo "Installing golangci-lint" && curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOPATH)/bin v1.40.1
@which msgp 1>/dev/null || (echo "Installing msgp" && go install -v github.com/tinylib/msgp@v1.1.3)
@which stringer 1>/dev/null || (echo "Installing stringer" && go install -v golang.org/x/tools/cmd/stringer)
@echo "Installing msgp" && go install -v github.com/tinylib/msgp@latest
@echo "Installing stringer" && go install -v golang.org/x/tools/cmd/stringer@latest
crosscompile: ## cross compile minio
@(env bash $(PWD)/buildscripts/cross-compile.sh)
@ -40,12 +40,22 @@ lint: ## runs golangci-lint suite of linters
check: test
test: verifiers build ## builds minio, runs linters, tests
@echo "Running unit tests"
@GOGC=25 GO111MODULE=on CGO_ENABLED=0 go test -tags kqueue ./... 1>/dev/null
@GO111MODULE=on CGO_ENABLED=0 go test -tags kqueue ./... 1>/dev/null
test-race: verifiers build
test-race: verifiers build ## builds minio, runs linters, tests (race)
@echo "Running unit tests under -race"
@(env bash $(PWD)/buildscripts/race.sh)
test-iam: build ## verify IAM (external IDP, etcd backends)
@echo "Running tests for IAM (external IDP, etcd backends)"
@CGO_ENABLED=0 go test -tags kqueue -v -run TestIAM* ./cmd
@echo "Running tests for IAM (external IDP, etcd backends) with -race"
@CGO_ENABLED=1 go test -race -tags kqueue -v -run TestIAM* ./cmd
test-replication: install ## verify multi site replication
@echo "Running tests for Replication three sites"
@(env bash $(PWD)/docs/bucket/replication/setup_3site_replication.sh)
verify: ## verify minio various setups
@echo "Verifying build with race"
@GO111MODULE=on CGO_ENABLED=1 go build -race -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null

View file

@ -114,8 +114,6 @@ func (api objectAPIHandlers) PutBucketACLHandler(w http.ResponseWriter, r *http.
writeErrorResponse(ctx, w, toAPIError(ctx, NotImplemented{}), r.URL)
return
}
w.(http.Flusher).Flush()
}
// GetBucketACLHandler - GET Bucket ACL
@ -164,8 +162,6 @@ func (api objectAPIHandlers) GetBucketACLHandler(w http.ResponseWriter, r *http.
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
w.(http.Flusher).Flush()
}
// PutObjectACLHandler - PUT Object ACL
@ -229,8 +225,6 @@ func (api objectAPIHandlers) PutObjectACLHandler(w http.ResponseWriter, r *http.
writeErrorResponse(ctx, w, toAPIError(ctx, NotImplemented{}), r.URL)
return
}
w.(http.Flusher).Flush()
}
// GetObjectACLHandler - GET Object ACL
@ -283,6 +277,4 @@ func (api objectAPIHandlers) GetObjectACLHandler(w http.ResponseWriter, r *http.
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
w.(http.Flusher).Flush()
}

View file

@ -103,6 +103,12 @@ func toAdminAPIErr(ctx context.Context, err error) APIError {
Description: err.Error(),
HTTPStatusCode: http.StatusServiceUnavailable,
}
case errors.Is(err, errPolicyInUse):
apiErr = APIError{
Code: "XMinioAdminPolicyInUse",
Description: "The policy cannot be removed, as it is in use",
HTTPStatusCode: http.StatusBadRequest,
}
case errors.Is(err, kes.ErrKeyExists):
apiErr = APIError{
Code: "XMinioKMSKeyExists",

View file

@ -19,6 +19,7 @@ package cmd
import (
"bytes"
"context"
"encoding/json"
"io"
"net/http"
@ -78,6 +79,22 @@ func (a adminAPIHandlers) DelConfigKVHandler(w http.ResponseWriter, r *http.Requ
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
dynamic := config.SubSystemsDynamic.Contains(string(kvBytes))
if dynamic {
applyDynamic(ctx, objectAPI, cfg, r, w)
}
}
func applyDynamic(ctx context.Context, objectAPI ObjectLayer, cfg config.Config, r *http.Request, w http.ResponseWriter) {
// Apply dynamic values.
if err := applyDynamicConfig(GlobalContext, objectAPI, cfg); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
globalNotificationSys.SignalService(serviceReloadDynamic)
// Tell the client that dynamic config was applied.
w.Header().Set(madmin.ConfigAppliedHeader, madmin.ConfigAppliedTrue)
}
// SetConfigKVHandler - PUT /minio/admin/v3/set-config-kv
@ -135,14 +152,7 @@ func (a adminAPIHandlers) SetConfigKVHandler(w http.ResponseWriter, r *http.Requ
}
if dynamic {
// Apply dynamic values.
if err := applyDynamicConfig(GlobalContext, objectAPI, cfg); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
globalNotificationSys.SignalService(serviceReloadDynamic)
// If all values were dynamic, tell the client.
w.Header().Set(madmin.ConfigAppliedHeader, madmin.ConfigAppliedTrue)
applyDynamic(ctx, objectAPI, cfg, r, w)
}
writeSuccessResponseHeadersOnly(w)
}
@ -326,7 +336,6 @@ func (a adminAPIHandlers) HelpConfigKVHandler(w http.ResponseWriter, r *http.Req
}
json.NewEncoder(w).Encode(rd)
w.(http.Flusher).Flush()
}
// SetConfigHandler - PUT /minio/admin/v3/config

View file

@ -269,8 +269,6 @@ func (a adminAPIHandlers) SiteReplicationInfo(w http.ResponseWriter, r *http.Req
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
w.(http.Flusher).Flush()
}
func (a adminAPIHandlers) SRInternalGetIDPSettings(w http.ResponseWriter, r *http.Request) {
@ -288,8 +286,6 @@ func (a adminAPIHandlers) SRInternalGetIDPSettings(w http.ResponseWriter, r *htt
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
w.(http.Flusher).Flush()
}
func readJSONBody(ctx context.Context, body io.Reader, v interface{}, encryptionKey string) APIErrorCode {

View file

@ -0,0 +1,141 @@
// Copyright (c) 2015-2021 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
//go:build !race
// +build !race
// Tests in this file are not run under the `-race` flag as they are too slow
// and cause context deadline errors.
package cmd
import (
"context"
"fmt"
"sync"
"testing"
"time"
"github.com/minio/madmin-go"
minio "github.com/minio/minio-go/v7"
)
func runAllIAMConcurrencyTests(suite *TestSuiteIAM, c *check) {
suite.SetUpSuite(c)
suite.TestDeleteUserRace(c)
suite.TearDownSuite(c)
}
func TestIAMInternalIDPConcurrencyServerSuite(t *testing.T) {
baseTestCases := []TestSuiteCommon{
// Init and run test on FS backend with signature v4.
{serverType: "FS", signer: signerV4},
// Init and run test on FS backend, with tls enabled.
{serverType: "FS", signer: signerV4, secure: true},
// Init and run test on Erasure backend.
{serverType: "Erasure", signer: signerV4},
// Init and run test on ErasureSet backend.
{serverType: "ErasureSet", signer: signerV4},
}
testCases := []*TestSuiteIAM{}
for _, bt := range baseTestCases {
testCases = append(testCases,
newTestSuiteIAM(bt, false),
newTestSuiteIAM(bt, true),
)
}
for i, testCase := range testCases {
etcdStr := ""
if testCase.withEtcdBackend {
etcdStr = " (with etcd backend)"
}
t.Run(
fmt.Sprintf("Test: %d, ServerType: %s%s", i+1, testCase.serverType, etcdStr),
func(t *testing.T) {
runAllIAMConcurrencyTests(testCase, &check{t, testCase.serverType})
},
)
}
}
func (s *TestSuiteIAM) TestDeleteUserRace(c *check) {
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
bucket := getRandomBucketName()
err := s.client.MakeBucket(ctx, bucket, minio.MakeBucketOptions{})
if err != nil {
c.Fatalf("bucket creat error: %v", err)
}
// Create a policy
policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
}
userCount := 50
accessKeys := make([]string, userCount)
secretKeys := make([]string, userCount)
for i := 0; i < userCount; i++ {
accessKey, secretKey := mustGenerateCredentials(c)
err = s.adm.SetUser(ctx, accessKey, secretKey, madmin.AccountEnabled)
if err != nil {
c.Fatalf("Unable to set user: %v", err)
}
err = s.adm.SetPolicy(ctx, policy, accessKey, false)
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
}
accessKeys[i] = accessKey
secretKeys[i] = secretKey
}
wg := sync.WaitGroup{}
for i := 0; i < userCount; i++ {
wg.Add(1)
go func(i int) {
defer wg.Done()
uClient := s.getUserClient(c, accessKeys[i], secretKeys[i], "")
err := s.adm.RemoveUser(ctx, accessKeys[i])
if err != nil {
c.Fatalf("unable to remove user: %v", err)
}
c.mustNotListObjects(ctx, uClient, bucket)
}(i)
}
wg.Wait()
}

View file

@ -1307,9 +1307,6 @@ func (a adminAPIHandlers) ListBucketPolicies(w http.ResponseWriter, r *http.Requ
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
w.(http.Flusher).Flush()
}
// ListCannedPolicies - GET /minio/admin/v3/list-canned-policies
@ -1342,8 +1339,6 @@ func (a adminAPIHandlers) ListCannedPolicies(w http.ResponseWriter, r *http.Requ
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
w.(http.Flusher).Flush()
}
// RemoveCannedPolicy - DELETE /minio/admin/v3/remove-canned-policy?name=<policy_name>

View file

@ -20,6 +20,7 @@ package cmd
import (
"context"
"fmt"
"os"
"strings"
"testing"
"time"
@ -32,25 +33,26 @@ import (
)
const (
testDefaultTimeout = 10 * time.Second
testDefaultTimeout = 30 * time.Second
)
// API suite container for IAM
type TestSuiteIAM struct {
TestSuiteCommon
// Flag to turn on tests for etcd backend IAM
withEtcdBackend bool
endpoint string
adm *madmin.AdminClient
client *minio.Client
}
func newTestSuiteIAM(c TestSuiteCommon) *TestSuiteIAM {
return &TestSuiteIAM{TestSuiteCommon: c}
func newTestSuiteIAM(c TestSuiteCommon, withEtcdBackend bool) *TestSuiteIAM {
return &TestSuiteIAM{TestSuiteCommon: c, withEtcdBackend: withEtcdBackend}
}
func (s *TestSuiteIAM) SetUpSuite(c *check) {
s.TestSuiteCommon.SetUpSuite(c)
func (s *TestSuiteIAM) iamSetup(c *check) {
var err error
// strip url scheme from endpoint
s.endpoint = strings.TrimPrefix(s.endPoint, "http://")
@ -75,6 +77,50 @@ func (s *TestSuiteIAM) SetUpSuite(c *check) {
}
}
const (
EnvTestEtcdBackend = "ETCD_SERVER"
)
func (s *TestSuiteIAM) setUpEtcd(c *check, etcdServer string) {
ctx, cancel := context.WithTimeout(context.Background(), testDefaultTimeout)
defer cancel()
configCmds := []string{
"etcd",
"endpoints=" + etcdServer,
"path_prefix=" + mustGetUUID(),
}
_, err := s.adm.SetConfigKV(ctx, strings.Join(configCmds, " "))
if err != nil {
c.Fatalf("unable to setup Etcd for tests: %v", err)
}
s.RestartIAMSuite(c)
}
func (s *TestSuiteIAM) SetUpSuite(c *check) {
// If etcd backend is specified and etcd server is not present, the test
// is skipped.
etcdServer := os.Getenv(EnvTestEtcdBackend)
if s.withEtcdBackend && etcdServer == "" {
c.Skip("Skipping etcd backend IAM test as no etcd server is configured.")
}
s.TestSuiteCommon.SetUpSuite(c)
s.iamSetup(c)
if s.withEtcdBackend {
s.setUpEtcd(c, etcdServer)
}
}
func (s *TestSuiteIAM) RestartIAMSuite(c *check) {
s.TestSuiteCommon.RestartTestServer(c)
s.iamSetup(c)
}
func (s *TestSuiteIAM) getUserClient(c *check, accessKey, secretKey, sessionToken string) *minio.Client {
client, err := minio.New(s.endpoint, &minio.Options{
Creds: credentials.NewStaticV4(accessKey, secretKey, sessionToken),
@ -91,26 +137,41 @@ func runAllIAMTests(suite *TestSuiteIAM, c *check) {
suite.SetUpSuite(c)
suite.TestUserCreate(c)
suite.TestPolicyCreate(c)
suite.TestCannedPolicies(c)
suite.TestGroupAddRemove(c)
suite.TestServiceAccountOps(c)
suite.TearDownSuite(c)
}
func TestIAMInternalIDPServerSuite(t *testing.T) {
testCases := []*TestSuiteIAM{
baseTestCases := []TestSuiteCommon{
// Init and run test on FS backend with signature v4.
newTestSuiteIAM(TestSuiteCommon{serverType: "FS", signer: signerV4}),
{serverType: "FS", signer: signerV4},
// Init and run test on FS backend, with tls enabled.
newTestSuiteIAM(TestSuiteCommon{serverType: "FS", signer: signerV4, secure: true}),
{serverType: "FS", signer: signerV4, secure: true},
// Init and run test on Erasure backend.
newTestSuiteIAM(TestSuiteCommon{serverType: "Erasure", signer: signerV4}),
{serverType: "Erasure", signer: signerV4},
// Init and run test on ErasureSet backend.
newTestSuiteIAM(TestSuiteCommon{serverType: "ErasureSet", signer: signerV4}),
{serverType: "ErasureSet", signer: signerV4},
}
testCases := []*TestSuiteIAM{}
for _, bt := range baseTestCases {
testCases = append(testCases,
newTestSuiteIAM(bt, false),
newTestSuiteIAM(bt, true),
)
}
for i, testCase := range testCases {
t.Run(fmt.Sprintf("Test: %d, ServerType: %s", i+1, testCase.serverType), func(t *testing.T) {
runAllIAMTests(testCase, &check{t, testCase.serverType})
})
etcdStr := ""
if testCase.withEtcdBackend {
etcdStr = " (with etcd backend)"
}
t.Run(
fmt.Sprintf("Test: %d, ServerType: %s%s", i+1, testCase.serverType, etcdStr),
func(t *testing.T) {
runAllIAMTests(testCase, &check{t, testCase.serverType})
},
)
}
}
@ -258,13 +319,87 @@ func (s *TestSuiteIAM) TestPolicyCreate(c *check) {
c.Fatalf("policy was missing!")
}
// 5. Check that policy can be deleted.
// 5. Check that policy cannot be deleted when attached to a user.
err = s.adm.RemoveCannedPolicy(ctx, policy)
if err == nil {
c.Fatalf("policy could be unexpectedly deleted!")
}
// 6. Delete the user and then delete the policy.
err = s.adm.RemoveUser(ctx, accessKey)
if err != nil {
c.Fatalf("user could not be deleted: %v", err)
}
err = s.adm.RemoveCannedPolicy(ctx, policy)
if err != nil {
c.Fatalf("policy delete err: %v", err)
c.Fatalf("policy del err: %v", err)
}
}
func (s *TestSuiteIAM) TestCannedPolicies(c *check) {
ctx, cancel := context.WithTimeout(context.Background(), testDefaultTimeout)
defer cancel()
policies, err := s.adm.ListCannedPolicies(ctx)
if err != nil {
c.Fatalf("unable to list policies: %v", err)
}
defaultPolicies := []string{
"readwrite",
"readonly",
"writeonly",
"diagnostics",
"consoleAdmin",
}
for _, v := range defaultPolicies {
if _, ok := policies[v]; !ok {
c.Fatalf("Failed to find %s in policies list", v)
}
}
bucket := getRandomBucketName()
err = s.client.MakeBucket(ctx, bucket, minio.MakeBucketOptions{})
if err != nil {
c.Fatalf("bucket creat error: %v", err)
}
policyBytes := []byte(fmt.Sprintf(`{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
// Check that default policies can be overwritten.
err = s.adm.AddCannedPolicy(ctx, "readwrite", policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
}
info, err := s.adm.InfoCannedPolicy(ctx, "readwrite")
if err != nil {
c.Fatalf("policy info err: %v", err)
}
infoStr := string(info)
if !strings.Contains(infoStr, `"s3:PutObject"`) || !strings.Contains(infoStr, ":"+bucket+"/") {
c.Fatalf("policy contains unexpected content!")
}
}
func (s *TestSuiteIAM) TestGroupAddRemove(c *check) {
ctx, cancel := context.WithTimeout(context.Background(), testDefaultTimeout)
defer cancel()
@ -627,7 +762,8 @@ func (c *check) mustListObjects(ctx context.Context, client *minio.Client, bucke
res := client.ListObjects(ctx, bucket, minio.ListObjectsOptions{})
v, ok := <-res
if ok && v.Err != nil {
c.Fatalf("user was unable to list unexpectedly!")
msg := fmt.Sprintf("user was unable to list: %v", v.Err)
c.Fatalf(msg)
}
}

View file

@ -951,54 +951,40 @@ func (a adminAPIHandlers) SpeedtestHandler(w http.ResponseWriter, r *http.Reques
duration = time.Second * 10
}
throughputSize := size
iopsSize := size
if autotune {
iopsSize = 4 * humanize.KiByte
deleteBucket := func() {
loc := pathJoin(minioMetaSpeedTestBucket, minioMetaSpeedTestBucketPrefix)
objectAPI.DeleteBucket(context.Background(), loc, DeleteBucketOptions{
Force: true,
NoRecreate: true,
})
}
keepAliveTicker := time.NewTicker(500 * time.Millisecond)
defer keepAliveTicker.Stop()
endBlankRepliesCh := make(chan error)
go func() {
for {
select {
case <-ctx.Done():
endBlankRepliesCh <- nil
return
case <-keepAliveTicker.C:
// Write a blank entry to prevent client from disconnecting
if err := json.NewEncoder(w).Encode(madmin.SpeedTestResult{}); err != nil {
endBlankRepliesCh <- err
return
}
w.(http.Flusher).Flush()
case endBlankRepliesCh <- nil:
enc := json.NewEncoder(w)
ch := speedTest(ctx, size, concurrent, duration, autotune)
for {
select {
case <-ctx.Done():
return
case <-keepAliveTicker.C:
// Write a blank entry to prevent client from disconnecting
if err := enc.Encode(madmin.SpeedTestResult{}); err != nil {
return
}
w.(http.Flusher).Flush()
case result, ok := <-ch:
if !ok {
deleteBucket()
return
}
if err := enc.Encode(result); err != nil {
return
}
w.(http.Flusher).Flush()
}
}()
result, err := speedTest(ctx, throughputSize, iopsSize, concurrent, duration, autotune)
if <-endBlankRepliesCh != nil {
return
}
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if err := json.NewEncoder(w).Encode(result); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
objectAPI.DeleteBucket(ctx, pathJoin(minioMetaSpeedTestBucket, minioMetaSpeedTestBucketPrefix), DeleteBucketOptions{Force: true, NoRecreate: true})
w.(http.Flusher).Flush()
}
// Admin API errors
@ -1943,13 +1929,15 @@ func (a adminAPIHandlers) BandwidthMonitorHandler(w http.ResponseWriter, r *http
}
}
}()
enc := json.NewEncoder(w)
for {
select {
case report, ok := <-reportCh:
if !ok {
return
}
if err := json.NewEncoder(w).Encode(report); err != nil {
if err := enc.Encode(report); err != nil {
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
return
}
@ -2096,7 +2084,7 @@ func fetchKMSStatus() madmin.KMS {
func fetchLoggerInfo() ([]madmin.Logger, []madmin.Audit) {
var loggerInfo []madmin.Logger
var auditloggerInfo []madmin.Audit
for _, target := range logger.Targets {
for _, target := range logger.Targets() {
if target.Endpoint() != "" {
tgt := target.String()
err := checkConnection(target.Endpoint(), 15*time.Second)
@ -2112,7 +2100,7 @@ func fetchLoggerInfo() ([]madmin.Logger, []madmin.Audit) {
}
}
for _, target := range logger.AuditTargets {
for _, target := range logger.AuditTargets() {
if target.Endpoint() != "" {
tgt := target.String()
err := checkConnection(target.Endpoint(), 15*time.Second)

View file

@ -1362,8 +1362,8 @@ var errorCodes = errorCodeMap{
},
ErrBackendDown: {
Code: "XMinioBackendDown",
Description: "Object storage backend is unreachable",
HTTPStatusCode: http.StatusServiceUnavailable,
Description: "Remote backend is unreachable",
HTTPStatusCode: http.StatusBadRequest,
},
ErrIncorrectContinuationToken: {
Code: "InvalidArgument",
@ -1956,6 +1956,8 @@ func toAPIErrorCode(ctx context.Context, err error) (apiErr APIErrorCode) {
apiErr = ErrNoSuchKey
case MethodNotAllowed:
apiErr = ErrMethodNotAllowed
case ObjectLocked:
apiErr = ErrObjectLocked
case InvalidVersionID:
apiErr = ErrInvalidVersionID
case VersionNotFound:
@ -2127,6 +2129,11 @@ func toAPIError(ctx context.Context, err error) APIError {
}
}
if apiErr.Code == "XMinioBackendDown" {
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, err)
return apiErr
}
if apiErr.Code == "InternalError" {
// If we see an internal error try to interpret
// any underlying errors if possible depending on

View file

@ -740,7 +740,6 @@ func writeResponse(w http.ResponseWriter, statusCode int, response []byte, mType
w.WriteHeader(statusCode)
if response != nil {
w.Write(response)
w.(http.Flusher).Flush()
}
}

View file

@ -487,6 +487,26 @@ func setAuthHandler(h http.Handler) http.Handler {
// handler for validating incoming authorization headers.
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
aType := getRequestAuthType(r)
if aType == authTypeSigned || aType == authTypeSignedV2 || aType == authTypeStreamingSigned {
// Verify if date headers are set, if not reject the request
amzDate, errCode := parseAmzDateHeader(r)
if errCode != ErrNone {
// All our internal APIs are sensitive to the Date
// header; for all requests where the Date header is not
// present, we will reject such clients.
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(errCode), r.URL)
atomic.AddUint64(&globalHTTPStats.rejectedRequestsTime, 1)
return
}
// Verify if the request date header is shifted by less than globalMaxSkewTime parameter in the past
// or in the future, reject request otherwise.
curTime := UTCNow()
if curTime.Sub(amzDate) > globalMaxSkewTime || amzDate.Sub(curTime) > globalMaxSkewTime {
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrRequestTimeTooSkewed), r.URL)
atomic.AddUint64(&globalHTTPStats.rejectedRequestsTime, 1)
return
}
}
if isSupportedS3AuthType(aType) || aType == authTypeJWT || aType == authTypeSTS {
h.ServeHTTP(w, r)
return

View file

@ -123,7 +123,6 @@ func (b *bootstrapRESTServer) VerifyHandler(w http.ResponseWriter, r *http.Reque
ctx := newContext(r, w, "VerifyHandler")
cfg := getServerSystemCfg()
logger.LogIf(ctx, json.NewEncoder(w).Encode(&cfg))
w.(http.Flusher).Flush()
}
// registerBootstrapRESTHandlers - register bootstrap rest router.

View file

@ -175,68 +175,76 @@ func enforceRetentionBypassForDelete(ctx context.Context, r *http.Request, bucke
// For objects in "Governance" mode, overwrite is allowed if a) object retention date is past OR
// governance bypass headers are set and user has governance bypass permissions.
// Objects in compliance mode can be overwritten only if retention date is being extended. No mode change is permitted.
func enforceRetentionBypassForPut(ctx context.Context, r *http.Request, bucket, object string, getObjectInfoFn GetObjectInfoFn, objRetention *objectlock.ObjectRetention, cred auth.Credentials, owner bool) (ObjectInfo, APIErrorCode) {
func enforceRetentionBypassForPut(ctx context.Context, r *http.Request, oi ObjectInfo, objRetention *objectlock.ObjectRetention, cred auth.Credentials, owner bool) error {
byPassSet := objectlock.IsObjectLockGovernanceBypassSet(r.Header)
opts, err := getOpts(ctx, r, bucket, object)
if err != nil {
return ObjectInfo{}, toAPIErrorCode(ctx, err)
}
oi, err := getObjectInfoFn(ctx, bucket, object, opts)
if err != nil {
return oi, toAPIErrorCode(ctx, err)
}
t, err := objectlock.UTCNowNTP()
if err != nil {
logger.LogIf(ctx, err)
return oi, ErrObjectLocked
return ObjectLocked{Bucket: oi.Bucket, Object: oi.Name, VersionID: oi.VersionID}
}
// Pass in relative days from current time, to additionally verify "object-lock-remaining-retention-days" policy if any.
// Pass in relative days from current time, to additionally
// verify "object-lock-remaining-retention-days" policy if any.
days := int(math.Ceil(math.Abs(objRetention.RetainUntilDate.Sub(t).Hours()) / 24))
ret := objectlock.GetObjectRetentionMeta(oi.UserDefined)
if ret.Mode.Valid() {
// Retention has expired you may change whatever you like.
if ret.RetainUntilDate.Before(t) {
perm := isPutRetentionAllowed(bucket, object,
apiErr := isPutRetentionAllowed(oi.Bucket, oi.Name,
days, objRetention.RetainUntilDate.Time,
objRetention.Mode, byPassSet, r, cred,
owner)
return oi, perm
switch apiErr {
case ErrAccessDenied:
return errAuthentication
}
return nil
}
switch ret.Mode {
case objectlock.RetGovernance:
govPerm := isPutRetentionAllowed(bucket, object, days,
govPerm := isPutRetentionAllowed(oi.Bucket, oi.Name, days,
objRetention.RetainUntilDate.Time, objRetention.Mode,
byPassSet, r, cred, owner)
// Governance mode retention period cannot be shortened, if x-amz-bypass-governance is not set.
if !byPassSet {
if objRetention.Mode != objectlock.RetGovernance || objRetention.RetainUntilDate.Before((ret.RetainUntilDate.Time)) {
return oi, ErrObjectLocked
return ObjectLocked{Bucket: oi.Bucket, Object: oi.Name, VersionID: oi.VersionID}
}
}
return oi, govPerm
switch govPerm {
case ErrAccessDenied:
return errAuthentication
}
return nil
case objectlock.RetCompliance:
// Compliance retention mode cannot be changed or shortened.
// https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-retention-modes
if objRetention.Mode != objectlock.RetCompliance || objRetention.RetainUntilDate.Before((ret.RetainUntilDate.Time)) {
return oi, ErrObjectLocked
return ObjectLocked{Bucket: oi.Bucket, Object: oi.Name, VersionID: oi.VersionID}
}
compliancePerm := isPutRetentionAllowed(bucket, object,
apiErr := isPutRetentionAllowed(oi.Bucket, oi.Name,
days, objRetention.RetainUntilDate.Time, objRetention.Mode,
false, r, cred, owner)
return oi, compliancePerm
switch apiErr {
case ErrAccessDenied:
return errAuthentication
}
return nil
}
return oi, ErrNone
return nil
} // No pre-existing retention metadata present.
perm := isPutRetentionAllowed(bucket, object,
apiErr := isPutRetentionAllowed(oi.Bucket, oi.Name,
days, objRetention.RetainUntilDate.Time,
objRetention.Mode, byPassSet, r, cred, owner)
return oi, perm
switch apiErr {
case ErrAccessDenied:
return errAuthentication
}
return nil
}
// checkPutObjectLockAllowed enforces object retention policy and legal hold policy

View file

@ -910,24 +910,24 @@ func replicateObject(ctx context.Context, ri ReplicateObjectInfo, objectAPI Obje
// metadata should be updated with last resync timestamp.
if objInfo.ReplicationStatusInternal != newReplStatusInternal || rinfos.ReplicationResynced() {
popts := ObjectOptions{
MTime: objInfo.ModTime,
VersionID: objInfo.VersionID,
UserDefined: make(map[string]string, len(objInfo.UserDefined)),
}
for k, v := range objInfo.UserDefined {
popts.UserDefined[k] = v
}
popts.UserDefined[ReservedMetadataPrefixLower+ReplicationStatus] = newReplStatusInternal
popts.UserDefined[ReservedMetadataPrefixLower+ReplicationTimestamp] = UTCNow().Format(time.RFC3339Nano)
popts.UserDefined[xhttp.AmzBucketReplicationStatus] = string(rinfos.ReplicationStatus())
for _, rinfo := range rinfos.Targets {
if rinfo.ResyncTimestamp != "" {
popts.UserDefined[targetResetHeader(rinfo.Arn)] = rinfo.ResyncTimestamp
}
}
if objInfo.UserTags != "" {
popts.UserDefined[xhttp.AmzObjectTagging] = objInfo.UserTags
MTime: objInfo.ModTime,
VersionID: objInfo.VersionID,
EvalMetadataFn: func(oi ObjectInfo) error {
oi.UserDefined[ReservedMetadataPrefixLower+ReplicationStatus] = newReplStatusInternal
oi.UserDefined[ReservedMetadataPrefixLower+ReplicationTimestamp] = UTCNow().Format(time.RFC3339Nano)
oi.UserDefined[xhttp.AmzBucketReplicationStatus] = string(rinfos.ReplicationStatus())
for _, rinfo := range rinfos.Targets {
if rinfo.ResyncTimestamp != "" {
oi.UserDefined[targetResetHeader(rinfo.Arn)] = rinfo.ResyncTimestamp
}
}
if objInfo.UserTags != "" {
oi.UserDefined[xhttp.AmzObjectTagging] = objInfo.UserTags
}
return nil
},
}
if _, err = objectAPI.PutObjectMetadata(ctx, bucket, object, popts); err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to update replication metadata for %s/%s(%s): %w",
bucket, objInfo.Name, objInfo.VersionID, err))

View file

@ -38,6 +38,7 @@ import (
fcolor "github.com/fatih/color"
"github.com/go-openapi/loads"
"github.com/inconshreveable/mousetrap"
dns2 "github.com/miekg/dns"
"github.com/minio/cli"
consoleCerts "github.com/minio/console/pkg/certs"
@ -66,6 +67,17 @@ var serverDebugLog = env.Get("_MINIO_SERVER_DEBUG", config.EnableOff) == config.
var defaultAWSCredProvider []credentials.Provider
func init() {
if runtime.GOOS == "windows" {
if mousetrap.StartedByExplorer() {
fmt.Printf("Don't double-click %s\n", os.Args[0])
fmt.Println("You need to open cmd.exe/PowerShell and run it from the command line")
fmt.Println("Refer to the docs here on how to run it as a Windows Service https://github.com/minio/minio-service/tree/master/windows")
fmt.Println("Press the Enter Key to Exit")
fmt.Scanln()
os.Exit(1)
}
}
rand.Seed(time.Now().UTC().UnixNano())
logger.Init(GOPATH, GOROOT)

View file

@ -25,29 +25,21 @@ const crossDomainXML = `<?xml version="1.0"?><!DOCTYPE cross-domain-policy SYSTE
// Standard path where an app would find cross domain policy information.
const crossDomainXMLEntity = "/crossdomain.xml"
// Cross domain policy implements http.Handler interface, implementing a custom ServerHTTP.
type crossDomainPolicy struct {
handler http.Handler
}
// A cross-domain policy file is an XML document that grants a web client, such as Adobe Flash Player
// or Adobe Acrobat (though not necessarily limited to these), permission to handle data across domains.
// When clients request content hosted on a particular source domain and that content make requests
// directed towards a domain other than its own, the remote domain needs to host a cross-domain
// policy file that grants access to the source domain, allowing the client to continue the transaction.
func setCrossDomainPolicy(h http.Handler) http.Handler {
return crossDomainPolicy{handler: h}
}
func (c crossDomainPolicy) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// Look for 'crossdomain.xml' in the incoming request.
switch r.URL.Path {
case crossDomainXMLEntity:
// Write the standard cross domain policy xml.
w.Write([]byte(crossDomainXML))
// Request completed, no need to serve to other handlers.
return
}
// Continue to serve the request further.
c.handler.ServeHTTP(w, r)
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Look for 'crossdomain.xml' in the incoming request.
switch r.URL.Path {
case crossDomainXMLEntity:
// Write the standard cross domain policy xml.
w.Write([]byte(crossDomainXML))
// Request completed, no need to serve to other handlers.
return
}
h.ServeHTTP(w, r)
})
}

View file

@ -22,6 +22,7 @@ import (
"context"
"encoding/binary"
"errors"
"io/fs"
"math"
"math/rand"
"net/http"
@ -816,14 +817,13 @@ func (f *folderScanner) scanFolder(ctx context.Context, folder cachedFolder, int
// scannerItem represents each file while walking.
type scannerItem struct {
Path string
Typ os.FileMode
Path string
bucket string // Bucket.
prefix string // Only the prefix if any, does not have final object name.
objectName string // Only the object name without prefixes.
lifeCycle *lifecycle.Lifecycle
replication replicationConfig
lifeCycle *lifecycle.Lifecycle
Typ fs.FileMode
heal bool // Has the object been selected for heal check?
debug bool
}

View file

@ -29,6 +29,8 @@ import (
"io/ioutil"
"net/http"
"os"
"path"
"strconv"
"strings"
"sync"
"sync/atomic"
@ -39,6 +41,7 @@ import (
"github.com/minio/minio/internal/crypto"
"github.com/minio/minio/internal/disk"
"github.com/minio/minio/internal/fips"
"github.com/minio/minio/internal/hash"
xhttp "github.com/minio/minio/internal/http"
xioutil "github.com/minio/minio/internal/ioutil"
"github.com/minio/minio/internal/kms"
@ -48,13 +51,18 @@ import (
const (
// cache.json object metadata for cached objects.
cacheMetaJSONFile = "cache.json"
cacheDataFile = "part.1"
cacheMetaVersion = "1.0.0"
cacheExpiryDays = 90 * time.Hour * 24 // defaults to 90 days
cacheMetaJSONFile = "cache.json"
cacheDataFile = "part.1"
cacheDataFilePrefix = "part"
cacheMetaVersion = "1.0.0"
cacheExpiryDays = 90 * time.Hour * 24 // defaults to 90 days
// SSECacheEncrypted is the metadata key indicating that the object
// is a cache entry encrypted with cache KMS master key in globalCacheKMS.
SSECacheEncrypted = "X-Minio-Internal-Encrypted-Cache"
SSECacheEncrypted = "X-Minio-Internal-Encrypted-Cache"
cacheMultipartDir = "multipart"
cacheStaleUploadCleanupInterval = time.Hour * 24
cacheStaleUploadExpiry = time.Hour * 24
)
// CacheChecksumInfoV1 - carries checksums of individual blocks on disk.
@ -78,6 +86,11 @@ type cacheMeta struct {
Hits int `json:"hits,omitempty"`
Bucket string `json:"bucket,omitempty"`
Object string `json:"object,omitempty"`
// for multipart upload
PartNumbers []int `json:"partNums,omitempty"` // Part Numbers
PartETags []string `json:"partETags,omitempty"` // Part ETags
PartSizes []int64 `json:"partSizes,omitempty"` // Part Sizes
PartActualSizes []int64 `json:"partASizes,omitempty"` // Part ActualSizes (compression)
}
// RangeInfo has the range, file and range length information for a cached range.
@ -104,13 +117,13 @@ func (m *cacheMeta) ToObjectInfo(bucket, object string) (o ObjectInfo) {
CacheStatus: CacheHit,
CacheLookupStatus: CacheHit,
}
meta := cloneMSS(m.Meta)
// We set file info only if its valid.
o.Size = m.Stat.Size
o.ETag = extractETag(m.Meta)
o.ContentType = m.Meta["content-type"]
o.ContentEncoding = m.Meta["content-encoding"]
if storageClass, ok := m.Meta[xhttp.AmzStorageClass]; ok {
o.ETag = extractETag(meta)
o.ContentType = meta["content-type"]
o.ContentEncoding = meta["content-encoding"]
if storageClass, ok := meta[xhttp.AmzStorageClass]; ok {
o.StorageClass = storageClass
} else {
o.StorageClass = globalMinioDefaultStorageClass
@ -119,20 +132,26 @@ func (m *cacheMeta) ToObjectInfo(bucket, object string) (o ObjectInfo) {
t time.Time
e error
)
if exp, ok := m.Meta["expires"]; ok {
if exp, ok := meta["expires"]; ok {
if t, e = time.Parse(http.TimeFormat, exp); e == nil {
o.Expires = t.UTC()
}
}
if mtime, ok := m.Meta["last-modified"]; ok {
if mtime, ok := meta["last-modified"]; ok {
if t, e = time.Parse(http.TimeFormat, mtime); e == nil {
o.ModTime = t.UTC()
}
}
o.Parts = make([]ObjectPartInfo, len(m.PartNumbers))
for i := range m.PartNumbers {
o.Parts[i].Number = m.PartNumbers[i]
o.Parts[i].Size = m.PartSizes[i]
o.Parts[i].ETag = m.PartETags[i]
o.Parts[i].ActualSize = m.PartActualSizes[i]
}
// etag/md5Sum has already been extracted. We need to
// remove to avoid it from appearing as part of user-defined metadata
o.UserDefined = cleanMetadata(m.Meta)
o.UserDefined = cleanMetadata(meta)
return o
}
@ -142,16 +161,18 @@ type diskCache struct {
online uint32 // ref: https://golang.org/pkg/sync/atomic/#pkg-note-BUG
purgeRunning int32
triggerGC chan struct{}
dir string // caching directory
stats CacheDiskStats // disk cache stats for prometheus
quotaPct int // max usage in %
pool sync.Pool
after int // minimum accesses before an object is cached.
lowWatermark int
highWatermark int
enableRange bool
commitWriteback bool
triggerGC chan struct{}
dir string // caching directory
stats CacheDiskStats // disk cache stats for prometheus
quotaPct int // max usage in %
pool sync.Pool
after int // minimum accesses before an object is cached.
lowWatermark int
highWatermark int
enableRange bool
commitWriteback bool
commitWritethrough bool
retryWritebackCh chan ObjectInfo
// nsMutex namespace lock
nsMutex *nsLockMap
@ -170,15 +191,17 @@ func newDiskCache(ctx context.Context, dir string, config cache.Config) (*diskCa
return nil, fmt.Errorf("Unable to initialize '%s' dir, %w", dir, err)
}
cache := diskCache{
dir: dir,
triggerGC: make(chan struct{}, 1),
stats: CacheDiskStats{Dir: dir},
quotaPct: quotaPct,
after: config.After,
lowWatermark: config.WatermarkLow,
highWatermark: config.WatermarkHigh,
enableRange: config.Range,
commitWriteback: config.CommitWriteback,
dir: dir,
triggerGC: make(chan struct{}, 1),
stats: CacheDiskStats{Dir: dir},
quotaPct: quotaPct,
after: config.After,
lowWatermark: config.WatermarkLow,
highWatermark: config.WatermarkHigh,
enableRange: config.Range,
commitWriteback: config.CacheCommitMode == CommitWriteBack,
commitWritethrough: config.CacheCommitMode == CommitWriteThrough,
retryWritebackCh: make(chan ObjectInfo, 10000),
online: 1,
pool: sync.Pool{
@ -190,6 +213,7 @@ func newDiskCache(ctx context.Context, dir string, config cache.Config) (*diskCa
nsMutex: newNSLock(false),
}
go cache.purgeWait(ctx)
go cache.cleanupStaleUploads(ctx)
if cache.commitWriteback {
go cache.scanCacheWritebackFailures(ctx)
}
@ -303,16 +327,11 @@ func (c *diskCache) purge(ctx context.Context) {
// ignore error we know what value we are passing.
scorer, _ := newFileScorer(toFree, time.Now().Unix(), 100)
// this function returns FileInfo for cached range files and cache data file.
fiStatFn := func(ranges map[string]string, dataFile, pathPrefix string) map[string]os.FileInfo {
// this function returns FileInfo for cached range files.
fiStatRangesFn := func(ranges map[string]string, pathPrefix string) map[string]os.FileInfo {
fm := make(map[string]os.FileInfo)
fname := pathJoin(pathPrefix, dataFile)
if fi, err := os.Stat(fname); err == nil {
fm[fname] = fi
}
for _, rngFile := range ranges {
fname = pathJoin(pathPrefix, rngFile)
fname := pathJoin(pathPrefix, rngFile)
if fi, err := os.Stat(fname); err == nil {
fm[fname] = fi
}
@ -320,6 +339,26 @@ func (c *diskCache) purge(ctx context.Context) {
return fm
}
// this function returns the most recent Atime among cached part files.
lastAtimeFn := func(partNums []int, pathPrefix string) time.Time {
lastATime := timeSentinel
for _, pnum := range partNums {
fname := pathJoin(pathPrefix, fmt.Sprintf("%s.%d", cacheDataFilePrefix, pnum))
if fi, err := os.Stat(fname); err == nil {
if atime.Get(fi).After(lastATime) {
lastATime = atime.Get(fi)
}
}
}
if len(partNums) == 0 {
fname := pathJoin(pathPrefix, cacheDataFile)
if fi, err := os.Stat(fname); err == nil {
lastATime = atime.Get(fi)
}
}
return lastATime
}
filterFn := func(name string, typ os.FileMode) error {
if name == minioMetaBucket {
// Proceed to next file.
@ -334,9 +373,10 @@ func (c *diskCache) purge(ctx context.Context) {
// Proceed to next file.
return nil
}
// stat all cached file ranges and cacheDataFile.
cachedFiles := fiStatFn(meta.Ranges, cacheDataFile, pathJoin(c.dir, name))
// get last access time of cache part files
lastAtime := lastAtimeFn(meta.PartNumbers, pathJoin(c.dir, name))
// stat all cached file ranges.
cachedRngFiles := fiStatRangesFn(meta.Ranges, pathJoin(c.dir, name))
objInfo := meta.ToObjectInfo("", "")
// prevent gc from clearing un-synced commits. This metadata is present when
// cache writeback commit setting is enabled.
@ -345,7 +385,27 @@ func (c *diskCache) purge(ctx context.Context) {
return nil
}
cc := cacheControlOpts(objInfo)
for fname, fi := range cachedFiles {
switch {
case cc != nil:
if cc.isStale(objInfo.ModTime) {
if err = removeAll(cacheDir); err != nil {
logger.LogIf(ctx, err)
}
scorer.adjustSaveBytes(-objInfo.Size)
// break early if sufficient disk space reclaimed.
if c.diskUsageLow() {
// disk usage is already low, return errDoneForNow to signal that filtering is complete.
return errDoneForNow
}
}
case lastAtime != timeSentinel:
// cached multipart or single part
objInfo.AccTime = lastAtime
objInfo.Name = pathJoin(c.dir, name, cacheDataFile)
scorer.addFileWithObjInfo(objInfo, numHits)
}
for fname, fi := range cachedRngFiles {
if cc != nil {
if cc.isStale(objInfo.ModTime) {
if err = removeAll(fname); err != nil {
@ -365,7 +425,7 @@ func (c *diskCache) purge(ctx context.Context) {
}
// clean up stale cache.json files for objects that never got cached but whose access count was maintained in cache.json
fi, err := os.Stat(pathJoin(cacheDir, cacheMetaJSONFile))
if err != nil || (fi.ModTime().Before(expiry) && len(cachedFiles) == 0) {
if err != nil || (fi.ModTime().Before(expiry) && len(cachedRngFiles) == 0) {
removeAll(cacheDir)
scorer.adjustSaveBytes(-fi.Size())
// Proceed to next file.
@ -581,8 +641,11 @@ func (c *diskCache) saveMetadata(ctx context.Context, bucket, object string, met
if m.Meta == nil {
m.Meta = make(map[string]string)
}
if etag, ok := meta["etag"]; ok {
m.Meta["etag"] = etag
// save etag in m.Meta if missing
if _, ok := m.Meta["etag"]; !ok {
if etag, ok := meta["etag"]; ok {
m.Meta["etag"] = etag
}
}
}
m.Hits++
@ -591,6 +654,50 @@ func (c *diskCache) saveMetadata(ctx context.Context, bucket, object string, met
return jsonSave(f, m)
}
// updates the ETag and ModTime on cache with ETag from backend
func (c *diskCache) updateMetadata(ctx context.Context, bucket, object, etag string, modTime time.Time, size int64) error {
cachedPath := getCacheSHADir(c.dir, bucket, object)
metaPath := pathJoin(cachedPath, cacheMetaJSONFile)
// Create cache directory if needed
if err := os.MkdirAll(cachedPath, 0777); err != nil {
return err
}
f, err := os.OpenFile(metaPath, os.O_RDWR, 0666)
if err != nil {
return err
}
defer f.Close()
m := &cacheMeta{
Version: cacheMetaVersion,
Bucket: bucket,
Object: object,
}
if err := jsonLoad(f, m); err != nil && err != io.EOF {
return err
}
if m.Meta == nil {
m.Meta = make(map[string]string)
}
var key []byte
var objectEncryptionKey crypto.ObjectKey
if globalCacheKMS != nil {
// Calculating object encryption key
key, err = decryptObjectInfo(key, bucket, object, m.Meta)
if err != nil {
return err
}
copy(objectEncryptionKey[:], key)
m.Meta["etag"] = hex.EncodeToString(objectEncryptionKey.SealETag([]byte(etag)))
} else {
m.Meta["etag"] = etag
}
m.Meta["last-modified"] = modTime.UTC().Format(http.TimeFormat)
m.Meta["Content-Length"] = strconv.Itoa(int(size))
return jsonSave(f, m)
}
func getCacheSHADir(dir, bucket, object string) string {
return pathJoin(dir, getSHA256Hash([]byte(pathJoin(bucket, object))))
}
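For orientation, a minimal standalone sketch of the directory derivation above, assuming getSHA256Hash returns the hex-encoded SHA-256 of its input (an assumption read off this diff, not a verified definition):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"path"
)

func main() {
	// bucket "photos", object "2021/cat.png": the joined name hashes to a
	// single flat directory name, so the cache never mirrors bucket hierarchy.
	sum := sha256.Sum256([]byte(path.Join("photos", "2021/cat.png")))
	fmt.Println(path.Join("/var/cache/minio", hex.EncodeToString(sum[:])))
}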
@ -639,7 +746,7 @@ func (c *diskCache) bitrotWriteToCache(cachePath, fileName string, reader io.Rea
}
hashBytes := h.Sum(nil)
// compute md5Hash of original data stream if writeback commit to cache
if c.commitWriteback {
if c.commitWriteback || c.commitWritethrough {
if _, err = md5Hash.Write((*bufp)[:n]); err != nil {
return 0, "", err
}
@ -655,8 +762,9 @@ func (c *diskCache) bitrotWriteToCache(cachePath, fileName string, reader io.Rea
break
}
}
md5sumCurr := md5Hash.Sum(nil)
return bytesWritten, base64.StdEncoding.EncodeToString(md5Hash.Sum(nil)), nil
return bytesWritten, base64.StdEncoding.EncodeToString(md5sumCurr), nil
}
func newCacheEncryptReader(content io.Reader, bucket, object string, metadata map[string]string) (r io.Reader, err error) {
@ -691,22 +799,36 @@ func newCacheEncryptMetadata(bucket, object string, metadata map[string]string)
metadata[SSECacheEncrypted] = ""
return objectKey[:], nil
}
func (c *diskCache) GetLockContext(ctx context.Context, bucket, object string) (RWLocker, LockContext, error) {
cachePath := getCacheSHADir(c.dir, bucket, object)
cLock := c.NewNSLockFn(cachePath)
lkctx, err := cLock.GetLock(ctx, globalOperationTimeout)
return cLock, lkctx, err
}
// Caches the object to disk
func (c *diskCache) Put(ctx context.Context, bucket, object string, data io.Reader, size int64, rs *HTTPRangeSpec, opts ObjectOptions, incHitsOnly bool) (oi ObjectInfo, err error) {
func (c *diskCache) Put(ctx context.Context, bucket, object string, data io.Reader, size int64, rs *HTTPRangeSpec, opts ObjectOptions, incHitsOnly, writeback bool) (oi ObjectInfo, err error) {
if !c.diskSpaceAvailable(size) {
io.Copy(ioutil.Discard, data)
return oi, errDiskFull
}
cachePath := getCacheSHADir(c.dir, bucket, object)
cLock := c.NewNSLockFn(cachePath)
lkctx, err := cLock.GetLock(ctx, globalOperationTimeout)
cLock, lkctx, err := c.GetLockContext(ctx, bucket, object)
if err != nil {
return oi, err
}
ctx = lkctx.Context()
defer cLock.Unlock(lkctx.Cancel)
return c.put(ctx, bucket, object, data, size, rs, opts, incHitsOnly, writeback)
}
// Caches the object to disk
func (c *diskCache) put(ctx context.Context, bucket, object string, data io.Reader, size int64, rs *HTTPRangeSpec, opts ObjectOptions, incHitsOnly, writeback bool) (oi ObjectInfo, err error) {
if !c.diskSpaceAvailable(size) {
io.Copy(ioutil.Discard, data)
return oi, errDiskFull
}
cachePath := getCacheSHADir(c.dir, bucket, object)
meta, _, numHits, err := c.statCache(ctx, cachePath)
// Case where object not yet cached
if osIsNotExist(err) && c.after >= 1 {
@ -755,7 +877,7 @@ func (c *diskCache) Put(ctx context.Context, bucket, object string, data io.Read
removeAll(cachePath)
return oi, IncompleteBody{Bucket: bucket, Object: object}
}
if c.commitWriteback {
if writeback {
metadata["content-md5"] = md5sum
if md5bytes, err := base64.StdEncoding.DecodeString(md5sum); err == nil {
metadata["etag"] = hex.EncodeToString(md5bytes)
@ -897,7 +1019,7 @@ func (c *diskCache) bitrotReadFromCache(ctx context.Context, filePath string, of
return err
}
if _, err := io.Copy(writer, bytes.NewReader((*bufp)[blockOffset:blockOffset+blockLength])); err != nil {
if _, err = io.Copy(writer, bytes.NewReader((*bufp)[blockOffset:blockOffset+blockLength])); err != nil {
if err != io.ErrClosedPipe {
logger.LogIf(ctx, err)
return err
@ -950,19 +1072,78 @@ func (c *diskCache) Get(ctx context.Context, bucket, object string, rs *HTTPRang
gr, gerr := NewGetObjectReaderFromReader(bytes.NewBuffer(nil), objInfo, opts)
return gr, numHits, gerr
}
fn, off, length, nErr := NewGetObjectReader(rs, objInfo, opts)
fn, startOffset, length, nErr := NewGetObjectReader(rs, objInfo, opts)
if nErr != nil {
return nil, numHits, nErr
}
filePath := pathJoin(cacheObjPath, cacheFile)
var totalBytesRead int64
pr, pw := xioutil.WaitPipe()
go func() {
err := c.bitrotReadFromCache(ctx, filePath, off, length, pw)
if err != nil {
removeAll(cacheObjPath)
if len(objInfo.Parts) > 0 {
// For negative length read everything.
if length < 0 {
length = objInfo.Size - startOffset
}
pw.CloseWithError(err)
}()
// Reply with InvalidRange if the input offset and length fall out of range.
if startOffset > objInfo.Size || startOffset+length > objInfo.Size {
logger.LogIf(ctx, InvalidRange{startOffset, length, objInfo.Size}, logger.Application)
return nil, numHits, InvalidRange{startOffset, length, objInfo.Size}
}
// Get start part index and offset.
partIndex, partOffset, err := cacheObjectToPartOffset(objInfo, startOffset)
if err != nil {
return nil, numHits, InvalidRange{startOffset, length, objInfo.Size}
}
// Calculate endOffset according to length
endOffset := startOffset
if length > 0 {
endOffset += length - 1
}
// Get last part index to read given length.
lastPartIndex, _, err := cacheObjectToPartOffset(objInfo, endOffset)
if err != nil {
return nil, numHits, InvalidRange{startOffset, length, objInfo.Size}
}
go func() {
for ; partIndex <= lastPartIndex; partIndex++ {
if length == totalBytesRead {
break
}
partNumber := objInfo.Parts[partIndex].Number
// Save the current part name and size.
partSize := objInfo.Parts[partIndex].Size
partLength := partSize - partOffset
// partLength should be adjusted so that we don't write more data than what was requested.
if partLength > (length - totalBytesRead) {
partLength = length - totalBytesRead
}
filePath := pathJoin(cacheObjPath, fmt.Sprintf("part.%d", partNumber))
err := c.bitrotReadFromCache(ctx, filePath, partOffset, partLength, pw)
if err != nil {
removeAll(cacheObjPath)
pw.CloseWithError(err)
break
}
totalBytesRead += partLength
// partOffset will be valid only for the first part, hence reset it to 0 for
// the remaining parts.
partOffset = 0
} // End of read all parts loop.
pw.CloseWithError(err)
}()
} else {
go func() {
filePath := pathJoin(cacheObjPath, cacheFile)
err := c.bitrotReadFromCache(ctx, filePath, startOffset, length, pw)
if err != nil {
removeAll(cacheObjPath)
}
pw.CloseWithError(err)
}()
}
// Cleanup function to cause the go routine above to exit, in
// case of incomplete read.
pipeCloser := func() { pr.CloseWithError(nil) }
@ -980,11 +1161,17 @@ func (c *diskCache) Get(ctx context.Context, bucket, object string, rs *HTTPRang
gr.ObjInfo.Size = objSize
}
return gr, numHits, nil
}
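To make the part-spanning read above concrete, a short worked trace with invented part sizes:

// Worked trace (sizes assumed): objInfo.Parts sizes = [5, 5, 3], request
// rs = bytes 4-9, so startOffset = 4, length = 6, endOffset = 9.
// cacheObjectToPartOffset(objInfo, 4) -> partIndex = 0, partOffset = 4
// cacheObjectToPartOffset(objInfo, 9) -> lastPartIndex = 1
// Loop: part.1 contributes 1 byte (5 - 4), part.2 the remaining 5 bytes;
// totalBytesRead reaches 6 and the loop exits.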
// deletes the cached object - caller should have taken write lock
func (c *diskCache) delete(bucket, object string) (err error) {
cacheObjPath := getCacheSHADir(c.dir, bucket, object)
return removeAll(cacheObjPath)
}
// Deletes the cached object
func (c *diskCache) delete(ctx context.Context, cacheObjPath string) (err error) {
func (c *diskCache) Delete(ctx context.Context, bucket, object string) (err error) {
cacheObjPath := getCacheSHADir(c.dir, bucket, object)
cLock := c.NewNSLockFn(cacheObjPath)
lkctx, err := cLock.GetLock(ctx, globalOperationTimeout)
if err != nil {
@ -994,12 +1181,6 @@ func (c *diskCache) delete(ctx context.Context, cacheObjPath string) (err error)
return removeAll(cacheObjPath)
}
// Deletes the cached object
func (c *diskCache) Delete(ctx context.Context, bucket, object string) (err error) {
cacheObjPath := getCacheSHADir(c.dir, bucket, object)
return c.delete(ctx, cacheObjPath)
}
// convenience function to check if object is cached on this diskCache
func (c *diskCache) Exists(ctx context.Context, bucket, object string) bool {
if _, err := os.Stat(getCacheSHADir(c.dir, bucket, object)); err != nil {
@ -1040,3 +1221,394 @@ func (c *diskCache) scanCacheWritebackFailures(ctx context.Context) {
return
}
}
// NewMultipartUpload caches multipart uploads when MINIO_CACHE_COMMIT mode is writethrough
// multiparts are saved in the .minio.sys/multipart/cachePath/uploadID dir until finalized. Then the individual parts
// are moved from the upload dir to the cachePath/ directory.
func (c *diskCache) NewMultipartUpload(ctx context.Context, bucket, object, uID string, opts ObjectOptions) (uploadID string, err error) {
uploadID = uID
if uploadID == "" {
return "", InvalidUploadID{
Bucket: bucket,
Object: object,
UploadID: uploadID,
}
}
cachePath := getMultipartCacheSHADir(c.dir, bucket, object)
uploadIDDir := path.Join(cachePath, uploadID)
if err := os.MkdirAll(uploadIDDir, 0777); err != nil {
return uploadID, err
}
metaPath := pathJoin(uploadIDDir, cacheMetaJSONFile)
f, err := os.OpenFile(metaPath, os.O_RDWR|os.O_CREATE, 0666)
if err != nil {
return uploadID, err
}
defer f.Close()
m := &cacheMeta{
Version: cacheMetaVersion,
Bucket: bucket,
Object: object,
}
if err := jsonLoad(f, m); err != nil && err != io.EOF {
return uploadID, err
}
m.Meta = opts.UserDefined
m.Checksum = CacheChecksumInfoV1{Algorithm: HighwayHash256S.String(), Blocksize: cacheBlkSize}
m.Stat.ModTime = UTCNow()
if globalCacheKMS != nil {
m.Meta[ReservedMetadataPrefix+"Encrypted-Multipart"] = ""
if _, err := newCacheEncryptMetadata(bucket, object, m.Meta); err != nil {
return uploadID, err
}
}
err = jsonSave(f, m)
return uploadID, err
}
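A sketch of the on-disk layout this implies; the constant values are assumptions based on the comments in this diff (minioMetaBucket = ".minio.sys", cacheMultipartDir = "multipart"):

// While the upload is in progress (staging):
//   <cacheDir>/.minio.sys/multipart/<sha256(bucket/object)>/<uploadID>/cache.json
//   <cacheDir>/.minio.sys/multipart/<sha256(bucket/object)>/<uploadID>/part.<N>
// After CompleteMultipartUpload renames the parts (final):
//   <cacheDir>/<sha256(bucket/object)>/cache.json
//   <cacheDir>/<sha256(bucket/object)>/part.<N>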
// PutObjectPart caches the part to the cache multipart path.
func (c *diskCache) PutObjectPart(ctx context.Context, bucket, object, uploadID string, partID int, data io.Reader, size int64, opts ObjectOptions) (partInfo PartInfo, err error) {
oi := PartInfo{}
if !c.diskSpaceAvailable(size) {
io.Copy(ioutil.Discard, data)
return oi, errDiskFull
}
cachePath := getMultipartCacheSHADir(c.dir, bucket, object)
uploadIDDir := path.Join(cachePath, uploadID)
partIDLock := c.NewNSLockFn(pathJoin(uploadIDDir, strconv.Itoa(partID)))
lkctx, err := partIDLock.GetLock(ctx, globalOperationTimeout)
if err != nil {
return oi, err
}
ctx = lkctx.Context()
defer partIDLock.Unlock(lkctx.Cancel)
meta, _, _, err := c.statCache(ctx, uploadIDDir)
// Case where object not yet cached
if err != nil {
return oi, err
}
if !c.diskSpaceAvailable(size) {
return oi, errDiskFull
}
reader := data
var actualSize = uint64(size)
if globalCacheKMS != nil {
reader, err = newCachePartEncryptReader(ctx, bucket, object, partID, data, size, meta.Meta)
if err != nil {
return oi, err
}
actualSize, _ = sio.EncryptedSize(uint64(size))
}
n, md5sum, err := c.bitrotWriteToCache(uploadIDDir, fmt.Sprintf("part.%d", partID), reader, actualSize)
if IsErr(err, baseErrs...) {
// take the cache drive offline
c.setOffline()
}
if err != nil {
return oi, err
}
if actualSize != uint64(n) {
return oi, IncompleteBody{Bucket: bucket, Object: object}
}
var md5hex string
if md5bytes, err := base64.StdEncoding.DecodeString(md5sum); err == nil {
md5hex = hex.EncodeToString(md5bytes)
}
pInfo := PartInfo{
PartNumber: partID,
ETag: md5hex,
Size: n,
ActualSize: int64(actualSize),
LastModified: UTCNow(),
}
return pInfo, nil
}
// SavePartMetadata saves part upload metadata to uploadID directory on disk cache
func (c *diskCache) SavePartMetadata(ctx context.Context, bucket, object, uploadID string, partID int, pinfo PartInfo) error {
cachePath := getMultipartCacheSHADir(c.dir, bucket, object)
uploadDir := path.Join(cachePath, uploadID)
// acquire a write lock at upload path to update cache.json
uploadLock := c.NewNSLockFn(uploadDir)
ulkctx, err := uploadLock.GetLock(ctx, globalOperationTimeout)
if err != nil {
return err
}
defer uploadLock.Unlock(ulkctx.Cancel)
metaPath := pathJoin(uploadDir, cacheMetaJSONFile)
f, err := os.OpenFile(metaPath, os.O_RDWR, 0666)
if err != nil {
return err
}
defer f.Close()
m := &cacheMeta{}
if err := jsonLoad(f, m); err != nil && err != io.EOF {
return err
}
var key []byte
var objectEncryptionKey crypto.ObjectKey
if globalCacheKMS != nil {
// Calculating object encryption key
key, err = decryptObjectInfo(key, bucket, object, m.Meta)
if err != nil {
return err
}
copy(objectEncryptionKey[:], key)
pinfo.ETag = hex.EncodeToString(objectEncryptionKey.SealETag([]byte(pinfo.ETag)))
}
pIdx := cacheObjPartIndex(m, partID)
if pIdx == -1 {
m.PartActualSizes = append(m.PartActualSizes, pinfo.ActualSize)
m.PartNumbers = append(m.PartNumbers, pinfo.PartNumber)
m.PartETags = append(m.PartETags, pinfo.ETag)
m.PartSizes = append(m.PartSizes, pinfo.Size)
} else {
m.PartActualSizes[pIdx] = pinfo.ActualSize
m.PartNumbers[pIdx] = pinfo.PartNumber
m.PartETags[pIdx] = pinfo.ETag
m.PartSizes[pIdx] = pinfo.Size
}
return jsonSave(f, m)
}
// newCachePartEncryptReader returns an encrypted cache part reader, with the part data encrypted using the part encryption key
func newCachePartEncryptReader(ctx context.Context, bucket, object string, partID int, content io.Reader, size int64, metadata map[string]string) (r io.Reader, err error) {
var key []byte
var objectEncryptionKey, partEncryptionKey crypto.ObjectKey
// Calculating object encryption key
key, err = decryptObjectInfo(key, bucket, object, metadata)
if err != nil {
return nil, err
}
copy(objectEncryptionKey[:], key)
partEnckey := objectEncryptionKey.DerivePartKey(uint32(partID))
copy(partEncryptionKey[:], partEnckey[:])
wantSize := int64(-1)
if size >= 0 {
info := ObjectInfo{Size: size}
wantSize = info.EncryptedSize()
}
hReader, err := hash.NewReader(content, wantSize, "", "", size)
if err != nil {
return nil, err
}
pReader := NewPutObjReader(hReader)
content, err = pReader.WithEncryption(hReader, &partEncryptionKey)
if err != nil {
return nil, err
}
reader, err := sio.EncryptReader(content, sio.Config{Key: partEncryptionKey[:], MinVersion: sio.Version20, CipherSuites: fips.CipherSuitesDARE()})
if err != nil {
return nil, crypto.ErrInvalidCustomerKey
}
return reader, nil
}
// uploadIDExists returns an error if the uploadID is not being cached.
func (c *diskCache) uploadIDExists(bucket, object, uploadID string) (err error) {
mpartCachePath := getMultipartCacheSHADir(c.dir, bucket, object)
uploadIDDir := path.Join(mpartCachePath, uploadID)
if _, err := os.Stat(uploadIDDir); err != nil {
return err
}
return nil
}
// CompleteMultipartUpload completes multipart upload on cache. The parts and cache.json are moved from the temporary location in
// .minio.sys/multipart/cacheSHA/.. to cacheSHA path after part verification succeeds.
func (c *diskCache) CompleteMultipartUpload(ctx context.Context, bucket, object, uploadID string, uploadedParts []CompletePart, roi ObjectInfo, opts ObjectOptions) (oi ObjectInfo, err error) {
cachePath := getCacheSHADir(c.dir, bucket, object)
cLock := c.NewNSLockFn(cachePath)
lkctx, err := cLock.GetLock(ctx, globalOperationTimeout)
if err != nil {
return oi, err
}
ctx = lkctx.Context()
defer cLock.Unlock(lkctx.Cancel)
mpartCachePath := getMultipartCacheSHADir(c.dir, bucket, object)
uploadIDDir := path.Join(mpartCachePath, uploadID)
uploadMeta, _, _, uerr := c.statCache(ctx, uploadIDDir)
if uerr != nil {
return oi, errUploadIDNotFound
}
// Case where object not yet cached
// Calculate full object size.
var objectSize int64
// Calculate consolidated actual size.
var objectActualSize int64
var partETags []string
partETags, err = decryptCachePartETags(uploadMeta)
if err != nil {
return oi, err
}
for i, pi := range uploadedParts {
pIdx := cacheObjPartIndex(uploadMeta, pi.PartNumber)
if pIdx == -1 {
invp := InvalidPart{
PartNumber: pi.PartNumber,
GotETag: pi.ETag,
}
return oi, invp
}
pi.ETag = canonicalizeETag(pi.ETag)
if partETags[pIdx] != pi.ETag {
invp := InvalidPart{
PartNumber: pi.PartNumber,
ExpETag: partETags[pIdx],
GotETag: pi.ETag,
}
return oi, invp
}
// All parts except the last part have to be at least 5MB.
if (i < len(uploadedParts)-1) && !isMinAllowedPartSize(uploadMeta.PartActualSizes[pIdx]) {
return oi, PartTooSmall{
PartNumber: pi.PartNumber,
PartSize: uploadMeta.PartActualSizes[pIdx],
PartETag: pi.ETag,
}
}
// Save for total object size.
objectSize += uploadMeta.PartSizes[pIdx]
// Save the consolidated actual size.
objectActualSize += uploadMeta.PartActualSizes[pIdx]
}
uploadMeta.Stat.Size = objectSize
uploadMeta.Stat.ModTime = roi.ModTime
// if encrypted - make sure ETag updated
uploadMeta.Meta["etag"] = roi.ETag
uploadMeta.Meta[ReservedMetadataPrefix+"actual-size"] = strconv.FormatInt(objectActualSize, 10)
var cpartETags []string
var cpartNums []int
var cpartSizes, cpartActualSizes []int64
for _, pi := range uploadedParts {
pIdx := cacheObjPartIndex(uploadMeta, pi.PartNumber)
if pIdx != -1 {
cpartETags = append(cpartETags, uploadMeta.PartETags[pIdx])
cpartNums = append(cpartNums, uploadMeta.PartNumbers[pIdx])
cpartSizes = append(cpartSizes, uploadMeta.PartSizes[pIdx])
cpartActualSizes = append(cpartActualSizes, uploadMeta.PartActualSizes[pIdx])
}
}
uploadMeta.PartETags = cpartETags
uploadMeta.PartSizes = cpartSizes
uploadMeta.PartActualSizes = cpartActualSizes
uploadMeta.PartNumbers = cpartNums
uploadMeta.Hits++
metaPath := pathJoin(uploadIDDir, cacheMetaJSONFile)
f, err := os.OpenFile(metaPath, os.O_RDWR|os.O_CREATE, 0666)
if err != nil {
return oi, err
}
defer f.Close()
jsonSave(f, uploadMeta)
for _, pi := range uploadedParts {
part := fmt.Sprintf("part.%d", pi.PartNumber)
renameAll(pathJoin(uploadIDDir, part), pathJoin(cachePath, part))
}
renameAll(pathJoin(uploadIDDir, cacheMetaJSONFile), pathJoin(cachePath, cacheMetaJSONFile))
removeAll(uploadIDDir) // clean up any unused parts in the uploadIDDir
return uploadMeta.ToObjectInfo(bucket, object), nil
}
func (c *diskCache) AbortUpload(bucket, object, uploadID string) (err error) {
mpartCachePath := getMultipartCacheSHADir(c.dir, bucket, object)
uploadDir := path.Join(mpartCachePath, uploadID)
return removeAll(uploadDir)
}
// cacheObjPartIndex - returns the index of the matching object part number.
func cacheObjPartIndex(m *cacheMeta, partNumber int) int {
for i, part := range m.PartNumbers {
if partNumber == part {
return i
}
}
return -1
}
// cacheObjectToPartOffset calculates part index and part offset for requested offset for content on cache.
func cacheObjectToPartOffset(objInfo ObjectInfo, offset int64) (partIndex int, partOffset int64, err error) {
if offset == 0 {
// Special case - if offset is 0, then partIndex and partOffset are always 0.
return 0, 0, nil
}
partOffset = offset
// Seek until object offset maps to a particular part offset.
for i, part := range objInfo.Parts {
partIndex = i
// Offset is smaller than the part size; we have reached the proper part offset.
if partOffset < part.Size {
return partIndex, partOffset, nil
}
// Continue towards the next part.
partOffset -= part.Size
}
// Offset is beyond the size of the object; return InvalidRange.
return 0, 0, InvalidRange{}
}
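The same mapping restated as a runnable sketch over a plain slice of part sizes (sizes invented):

package main

import "fmt"

// partOffsetFor mirrors cacheObjectToPartOffset above for a slice of sizes.
func partOffsetFor(sizes []int64, offset int64) (idx int, off int64, ok bool) {
	off = offset
	for i, sz := range sizes {
		idx = i
		if off < sz {
			return idx, off, true
		}
		off -= sz // continue towards the next part
	}
	return 0, 0, false // offset beyond the object size
}

func main() {
	// Parts of 5, 5 and 3 bytes: absolute offset 7 lands 2 bytes into the
	// second part (index 1).
	fmt.Println(partOffsetFor([]int64{5, 5, 3}, 7)) // 1 2 true
}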
// get path of on-going multipart caching
func getMultipartCacheSHADir(dir, bucket, object string) string {
return pathJoin(dir, minioMetaBucket, cacheMultipartDir, getSHA256Hash([]byte(pathJoin(bucket, object))))
}
// clean up stale cache multipart uploads according to cleanup interval.
func (c *diskCache) cleanupStaleUploads(ctx context.Context) {
if !c.commitWritethrough {
return
}
timer := time.NewTimer(cacheStaleUploadCleanupInterval)
defer timer.Stop()
for {
select {
case <-ctx.Done():
return
case <-timer.C:
// Reset for the next interval
timer.Reset(cacheStaleUploadCleanupInterval)
now := time.Now()
readDirFn(pathJoin(c.dir, minioMetaBucket, cacheMultipartDir), func(shaDir string, typ os.FileMode) error {
return readDirFn(pathJoin(c.dir, minioMetaBucket, cacheMultipartDir, shaDir), func(uploadIDDir string, typ os.FileMode) error {
uploadIDPath := pathJoin(c.dir, minioMetaBucket, cacheMultipartDir, shaDir, uploadIDDir)
fi, err := os.Stat(uploadIDPath)
if err != nil {
return nil
}
if now.Sub(fi.ModTime()) > cacheStaleUploadExpiry {
removeAll(uploadIDPath)
}
return nil
})
})
}
}
}

View file

@ -110,7 +110,7 @@ func cacheControlOpts(o ObjectInfo) *cacheControl {
var headerVal string
for k, v := range m {
if strings.ToLower(k) == "cache-control" {
if strings.EqualFold(k, "cache-control") {
headerVal = v
}
@ -246,6 +246,9 @@ func decryptCacheObjectETag(info *ObjectInfo) error {
if globalCacheKMS == nil {
return errKMSNotConfigured
}
if len(info.Parts) > 0 { // multipart ETag is not encrypted since it is not md5sum
return nil
}
keyID, kmsKey, sealedKey, err := crypto.S3.ParseMetadata(info.UserDefined)
if err != nil {
return err
@ -271,6 +274,43 @@ func decryptCacheObjectETag(info *ObjectInfo) error {
return nil
}
// decryptCachePartETags tries to decrypt the part ETags saved in encrypted format using the cache KMS
func decryptCachePartETags(c *cacheMeta) ([]string, error) {
var partETags []string
encrypted := crypto.S3.IsEncrypted(c.Meta) && isCacheEncrypted(c.Meta)
switch {
case encrypted:
if globalCacheKMS == nil {
return partETags, errKMSNotConfigured
}
keyID, kmsKey, sealedKey, err := crypto.S3.ParseMetadata(c.Meta)
if err != nil {
return partETags, err
}
extKey, err := globalCacheKMS.DecryptKey(keyID, kmsKey, kms.Context{c.Bucket: path.Join(c.Bucket, c.Object)})
if err != nil {
return partETags, err
}
var objectKey crypto.ObjectKey
if err = objectKey.Unseal(extKey, sealedKey, crypto.S3.String(), c.Bucket, c.Object); err != nil {
return partETags, err
}
for i := range c.PartETags {
etagStr := tryDecryptETag(objectKey[:], c.PartETags[i], false)
// backend ETag was hex encoded before encrypting, so hex decode to get actual ETag
etag, err := hex.DecodeString(etagStr)
if err != nil {
return []string{}, err
}
partETags = append(partETags, string(etag))
}
return partETags, nil
default:
return c.PartETags, nil
}
}
func isMetadataSame(m1, m2 map[string]string) bool {
if m1 == nil && m2 == nil {
return true
@ -506,3 +546,38 @@ func bytesToClear(total, free int64, quotaPct, lowWatermark, highWatermark uint6
lowWMUsage := total * (int64)(lowWatermark*quotaPct) / (100 * 100)
return (uint64)(math.Min(float64(quotaAllowed), math.Max(0.0, float64(used-lowWMUsage))))
}
type multiWriter struct {
backendWriter io.Writer
cacheWriter *io.PipeWriter
pipeClosed bool
}
// multiWriter writes to the backend and the cache - if the cache write
// fails, close the pipe but continue writing to the backend
func (t *multiWriter) Write(p []byte) (n int, err error) {
n, err = t.backendWriter.Write(p)
if err == nil && n != len(p) {
err = io.ErrShortWrite
return
}
if err != nil {
if !t.pipeClosed {
t.cacheWriter.CloseWithError(err)
}
return
}
// ignore errors writing to cache
if !t.pipeClosed {
_, cerr := t.cacheWriter.Write(p)
if cerr != nil {
t.pipeClosed = true
t.cacheWriter.CloseWithError(cerr)
}
}
return len(p), nil
}
func cacheMultiWriter(w1 io.Writer, w2 *io.PipeWriter) io.Writer {
return &multiWriter{backendWriter: w1, cacheWriter: w2}
}
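A hypothetical in-package usage sketch; the pipe consumer is a stand-in for dcache.put. The custom type exists because the standard io.MultiWriter aborts on the first error from any writer, whereas here a cache failure must not fail the backend copy:

// exampleTee is illustrative only; backend and src are supplied by the caller.
func exampleTee(backend io.Writer, src io.Reader) error {
	rPipe, wPipe := io.Pipe()
	cacheDone := make(chan struct{})
	go func() {
		// cache side: an error here only closes the pipe; the backend
		// copy below keeps going.
		io.Copy(ioutil.Discard, rPipe)
		close(cacheDone)
	}()
	_, err := io.Copy(cacheMultiWriter(backend, wPipe), src)
	wPipe.Close()
	<-cacheDone
	return err // backend errors still surface to the caller
}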

View file

@ -59,6 +59,13 @@ const (
CommitFailed cacheCommitStatus = "failed"
)
const (
// CommitWriteBack allows staging and write back of cached content for single object uploads
CommitWriteBack string = "writeback"
// CommitWriteThrough allows caching multipart uploads to disk synchronously
CommitWriteThrough string = "writethrough"
)
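For reference, a hedged sketch of how a commit mode string (set via MINIO_CACHE_COMMIT, per the NewMultipartUpload comment elsewhere in this diff) maps onto the two flags used throughout; the real parsing lives in the cache config package:

// commitFlags is a hypothetical helper mirroring newDiskCache and
// newServerCacheObjects in this diff, which derive both flags from
// config.CacheCommitMode.
func commitFlags(mode string) (writeback, writethrough bool) {
	return mode == CommitWriteBack, mode == CommitWriteThrough
}

An unset mode leaves both flags false, so the multipart cache paths fall through to plain backend calls.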
// String returns string representation of status
func (s cacheCommitStatus) String() string {
return string(s)
@ -80,6 +87,13 @@ type CacheObjectLayer interface {
DeleteObjects(ctx context.Context, bucket string, objects []ObjectToDelete, opts ObjectOptions) ([]DeletedObject, []error)
PutObject(ctx context.Context, bucket, object string, data *PutObjReader, opts ObjectOptions) (objInfo ObjectInfo, err error)
CopyObject(ctx context.Context, srcBucket, srcObject, destBucket, destObject string, srcInfo ObjectInfo, srcOpts, dstOpts ObjectOptions) (objInfo ObjectInfo, err error)
// Multipart operations.
NewMultipartUpload(ctx context.Context, bucket, object string, opts ObjectOptions) (uploadID string, err error)
PutObjectPart(ctx context.Context, bucket, object, uploadID string, partID int, data *PutObjReader, opts ObjectOptions) (info PartInfo, err error)
AbortMultipartUpload(ctx context.Context, bucket, object, uploadID string, opts ObjectOptions) error
CompleteMultipartUpload(ctx context.Context, bucket, object, uploadID string, uploadedParts []CompletePart, opts ObjectOptions) (objInfo ObjectInfo, err error)
CopyObjectPart(ctx context.Context, srcBucket, srcObject, dstBucket, dstObject, uploadID string, partID int, startOffset int64, length int64, srcInfo ObjectInfo, srcOpts, dstOpts ObjectOptions) (pi PartInfo, e error)
// Storage operations.
StorageInfo(ctx context.Context) CacheStorageInfo
CacheStats() *CacheStats
@ -94,21 +108,26 @@ type cacheObjects struct {
// number of accesses after which to cache an object
after int
// commit objects in async manner
commitWriteback bool
commitWriteback bool
commitWritethrough bool
// if true migration is in progress from v1 to v2
migrating bool
// mutex to protect migration bool
migMutex sync.Mutex
// retry queue for writeback cache mode to reattempt upload to backend
wbRetryCh chan ObjectInfo
// Cache stats
cacheStats *CacheStats
InnerGetObjectNInfoFn func(ctx context.Context, bucket, object string, rs *HTTPRangeSpec, h http.Header, lockType LockType, opts ObjectOptions) (gr *GetObjectReader, err error)
InnerGetObjectInfoFn func(ctx context.Context, bucket, object string, opts ObjectOptions) (objInfo ObjectInfo, err error)
InnerDeleteObjectFn func(ctx context.Context, bucket, object string, opts ObjectOptions) (objInfo ObjectInfo, err error)
InnerPutObjectFn func(ctx context.Context, bucket, object string, data *PutObjReader, opts ObjectOptions) (objInfo ObjectInfo, err error)
InnerCopyObjectFn func(ctx context.Context, srcBucket, srcObject, destBucket, destObject string, srcInfo ObjectInfo, srcOpts, dstOpts ObjectOptions) (objInfo ObjectInfo, err error)
InnerGetObjectNInfoFn func(ctx context.Context, bucket, object string, rs *HTTPRangeSpec, h http.Header, lockType LockType, opts ObjectOptions) (gr *GetObjectReader, err error)
InnerGetObjectInfoFn func(ctx context.Context, bucket, object string, opts ObjectOptions) (objInfo ObjectInfo, err error)
InnerDeleteObjectFn func(ctx context.Context, bucket, object string, opts ObjectOptions) (objInfo ObjectInfo, err error)
InnerPutObjectFn func(ctx context.Context, bucket, object string, data *PutObjReader, opts ObjectOptions) (objInfo ObjectInfo, err error)
InnerCopyObjectFn func(ctx context.Context, srcBucket, srcObject, destBucket, destObject string, srcInfo ObjectInfo, srcOpts, dstOpts ObjectOptions) (objInfo ObjectInfo, err error)
InnerNewMultipartUploadFn func(ctx context.Context, bucket, object string, opts ObjectOptions) (uploadID string, err error)
InnerPutObjectPartFn func(ctx context.Context, bucket, object, uploadID string, partID int, data *PutObjReader, opts ObjectOptions) (info PartInfo, err error)
InnerAbortMultipartUploadFn func(ctx context.Context, bucket, object, uploadID string, opts ObjectOptions) error
InnerCompleteMultipartUploadFn func(ctx context.Context, bucket, object, uploadID string, uploadedParts []CompletePart, opts ObjectOptions) (objInfo ObjectInfo, err error)
InnerCopyObjectPartFn func(ctx context.Context, srcBucket, srcObject, dstBucket, dstObject, uploadID string, partID int, startOffset int64, length int64, srcInfo ObjectInfo, srcOpts, dstOpts ObjectOptions) (pi PartInfo, e error)
}
func (c *cacheObjects) incHitsToMeta(ctx context.Context, dcache *diskCache, bucket, object string, size int64, eTag string, rs *HTTPRangeSpec) error {
@ -349,7 +368,7 @@ func (c *cacheObjects) GetObjectNInfo(ctx context.Context, bucket, object string
// use a new context to avoid locker prematurely timing out operation when the GetObjectNInfo returns.
dcache.Put(GlobalContext, bucket, object, bReader, bReader.ObjInfo.Size, rs, ObjectOptions{
UserDefined: getMetadata(bReader.ObjInfo),
}, false)
}, false, false)
return
}
}()
@ -367,7 +386,7 @@ func (c *cacheObjects) GetObjectNInfo(ctx context.Context, bucket, object string
io.LimitReader(pr, bkReader.ObjInfo.Size),
bkReader.ObjInfo.Size, rs, ObjectOptions{
UserDefined: userDefined,
}, false)
}, false, false)
// close the read end of the pipe, so the error gets
// propagated to teeReader
pr.CloseWithError(putErr)
@ -488,8 +507,6 @@ func (c *cacheObjects) CacheStats() (cs *CacheStats) {
// skipCache() returns true if cache migration is in progress
func (c *cacheObjects) skipCache() bool {
c.migMutex.Lock()
defer c.migMutex.Unlock()
return c.migrating
}
@ -619,8 +636,6 @@ func (c *cacheObjects) migrateCacheFromV1toV2(ctx context.Context) {
}
// update migration status
c.migMutex.Lock()
defer c.migMutex.Unlock()
c.migrating = false
logStartupMessage(color.Blue("Cache migration completed successfully."))
}
@ -663,31 +678,82 @@ func (c *cacheObjects) PutObject(ctx context.Context, bucket, object string, r *
return putObjectFn(ctx, bucket, object, r, opts)
}
if c.commitWriteback {
oi, err := dcache.Put(ctx, bucket, object, r, r.Size(), nil, opts, false)
oi, err := dcache.Put(ctx, bucket, object, r, r.Size(), nil, opts, false, true)
if err != nil {
return ObjectInfo{}, err
}
go c.uploadObject(GlobalContext, oi)
return oi, nil
}
objInfo, err = putObjectFn(ctx, bucket, object, r, opts)
if err == nil {
go func() {
// fill cache in the background
bReader, bErr := c.InnerGetObjectNInfoFn(GlobalContext, bucket, object, nil, http.Header{}, readLock, ObjectOptions{})
if bErr != nil {
return
}
defer bReader.Close()
oi, _, err := dcache.Stat(GlobalContext, bucket, object)
// avoid cache overwrite if another background routine filled cache
if err != nil || oi.ETag != bReader.ObjInfo.ETag {
dcache.Put(GlobalContext, bucket, object, bReader, bReader.ObjInfo.Size, nil, ObjectOptions{UserDefined: getMetadata(bReader.ObjInfo)}, false)
}
}()
if !c.commitWritethrough {
objInfo, err = putObjectFn(ctx, bucket, object, r, opts)
if err == nil {
go func() {
// fill cache in the background
bReader, bErr := c.InnerGetObjectNInfoFn(GlobalContext, bucket, object, nil, http.Header{}, readLock, ObjectOptions{})
if bErr != nil {
return
}
defer bReader.Close()
oi, _, err := dcache.Stat(GlobalContext, bucket, object)
// avoid cache overwrite if another background routine filled cache
if err != nil || oi.ETag != bReader.ObjInfo.ETag {
dcache.Put(GlobalContext, bucket, object, bReader, bReader.ObjInfo.Size, nil, ObjectOptions{UserDefined: getMetadata(bReader.ObjInfo)}, false, true)
}
}()
}
return objInfo, err
}
return objInfo, err
cLock, lkctx, cerr := dcache.GetLockContext(GlobalContext, bucket, object)
if cerr != nil {
return putObjectFn(ctx, bucket, object, r, opts)
}
defer cLock.Unlock(lkctx.Cancel)
// Initialize pipe to stream data to backend
pipeReader, pipeWriter := io.Pipe()
hashReader, err := hash.NewReader(pipeReader, size, "", "", r.ActualSize())
if err != nil {
return
}
// Initialize pipe to stream data to cache
rPipe, wPipe := io.Pipe()
infoCh := make(chan ObjectInfo)
errorCh := make(chan error)
go func() {
info, err := putObjectFn(ctx, bucket, object, NewPutObjReader(hashReader), opts)
if err != nil {
close(infoCh)
pipeReader.CloseWithError(err)
rPipe.CloseWithError(err)
errorCh <- err
return
}
close(errorCh)
infoCh <- info
}()
go func() {
_, err := dcache.put(lkctx.Context(), bucket, object, rPipe, r.Size(), nil, opts, false, false)
if err != nil {
rPipe.CloseWithError(err)
return
}
}()
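// Tee the incoming stream to both pipes: a backend failure surfaces
// through pipeReader/errorCh and aborts the copy, while a cache failure
// only closes wPipe and the backend copy continues.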
mwriter := cacheMultiWriter(pipeWriter, wPipe)
_, err = io.Copy(mwriter, r)
pipeWriter.Close()
wPipe.Close()
if err != nil {
err = <-errorCh
return ObjectInfo{}, err
}
info := <-infoCh
if cerr = dcache.updateMetadata(lkctx.Context(), bucket, object, info.ETag, info.ModTime, info.Size); cerr != nil {
dcache.delete(bucket, object)
}
return info, err
}
// upload cached object to backend in async commit mode.
@ -759,13 +825,14 @@ func newServerCacheObjects(ctx context.Context, config cache.Config) (CacheObjec
return nil, err
}
c := &cacheObjects{
cache: cache,
exclude: config.Exclude,
after: config.After,
migrating: migrateSw,
migMutex: sync.Mutex{},
commitWriteback: config.CommitWriteback,
cacheStats: newCacheStats(),
cache: cache,
exclude: config.Exclude,
after: config.After,
migrating: migrateSw,
commitWriteback: config.CacheCommitMode == CommitWriteBack,
commitWritethrough: config.CacheCommitMode == CommitWriteThrough,
cacheStats: newCacheStats(),
InnerGetObjectInfoFn: func(ctx context.Context, bucket, object string, opts ObjectOptions) (ObjectInfo, error) {
return newObjectLayerFn().GetObjectInfo(ctx, bucket, object, opts)
},
@ -781,6 +848,21 @@ func newServerCacheObjects(ctx context.Context, config cache.Config) (CacheObjec
InnerCopyObjectFn: func(ctx context.Context, srcBucket, srcObject, destBucket, destObject string, srcInfo ObjectInfo, srcOpts, dstOpts ObjectOptions) (objInfo ObjectInfo, err error) {
return newObjectLayerFn().CopyObject(ctx, srcBucket, srcObject, destBucket, destObject, srcInfo, srcOpts, dstOpts)
},
InnerNewMultipartUploadFn: func(ctx context.Context, bucket, object string, opts ObjectOptions) (uploadID string, err error) {
return newObjectLayerFn().NewMultipartUpload(ctx, bucket, object, opts)
},
InnerPutObjectPartFn: func(ctx context.Context, bucket, object, uploadID string, partID int, data *PutObjReader, opts ObjectOptions) (info PartInfo, err error) {
return newObjectLayerFn().PutObjectPart(ctx, bucket, object, uploadID, partID, data, opts)
},
InnerAbortMultipartUploadFn: func(ctx context.Context, bucket, object, uploadID string, opts ObjectOptions) error {
return newObjectLayerFn().AbortMultipartUpload(ctx, bucket, object, uploadID, opts)
},
InnerCompleteMultipartUploadFn: func(ctx context.Context, bucket, object, uploadID string, uploadedParts []CompletePart, opts ObjectOptions) (objInfo ObjectInfo, err error) {
return newObjectLayerFn().CompleteMultipartUpload(ctx, bucket, object, uploadID, uploadedParts, opts)
},
InnerCopyObjectPartFn: func(ctx context.Context, srcBucket, srcObject, dstBucket, dstObject, uploadID string, partID int, startOffset int64, length int64, srcInfo ObjectInfo, srcOpts, dstOpts ObjectOptions) (pi PartInfo, e error) {
return newObjectLayerFn().CopyObjectPart(ctx, srcBucket, srcObject, dstBucket, dstObject, uploadID, partID, startOffset, length, srcInfo, srcOpts, dstOpts)
},
}
c.cacheStats.GetDiskStats = func() []CacheDiskStats {
cacheDiskStats := make([]CacheDiskStats, len(c.cache))
@ -859,3 +941,247 @@ func (c *cacheObjects) queuePendingWriteback(ctx context.Context) {
}
}
}
// NewMultipartUpload - starts a new multipart upload operation on the backend; if writethrough mode is enabled, also starts caching the multipart upload.
func (c *cacheObjects) NewMultipartUpload(ctx context.Context, bucket, object string, opts ObjectOptions) (uploadID string, err error) {
newMultipartUploadFn := c.InnerNewMultipartUploadFn
dcache, err := c.getCacheToLoc(ctx, bucket, object)
if err != nil {
// disk cache could not be located, execute backend call.
return newMultipartUploadFn(ctx, bucket, object, opts)
}
if c.skipCache() {
return newMultipartUploadFn(ctx, bucket, object, opts)
}
if opts.ServerSideEncryption != nil { // avoid caching encrypted objects
dcache.Delete(ctx, bucket, object)
return newMultipartUploadFn(ctx, bucket, object, opts)
}
// skip cache for objects with locks
objRetention := objectlock.GetObjectRetentionMeta(opts.UserDefined)
legalHold := objectlock.GetObjectLegalHoldMeta(opts.UserDefined)
if objRetention.Mode.Valid() || legalHold.Status.Valid() {
dcache.Delete(ctx, bucket, object)
return newMultipartUploadFn(ctx, bucket, object, opts)
}
// fetch from backend if cache exclude pattern or cache-control
// directive set to exclude
if c.isCacheExclude(bucket, object) {
dcache.Delete(ctx, bucket, object)
return newMultipartUploadFn(ctx, bucket, object, opts)
}
if !c.commitWritethrough && !c.commitWriteback {
return newMultipartUploadFn(ctx, bucket, object, opts)
}
// initiate the multipart upload on the backend, then on the cache
uploadID, err = newMultipartUploadFn(ctx, bucket, object, opts)
dcache.NewMultipartUpload(GlobalContext, bucket, object, uploadID, opts)
return uploadID, err
}
// PutObjectPart streams the part to the cache concurrently if writethrough mode is enabled, otherwise redirects the call to the backend
func (c *cacheObjects) PutObjectPart(ctx context.Context, bucket, object, uploadID string, partID int, data *PutObjReader, opts ObjectOptions) (info PartInfo, err error) {
putObjectPartFn := c.InnerPutObjectPartFn
dcache, err := c.getCacheToLoc(ctx, bucket, object)
if err != nil {
// disk cache could not be located, execute backend call.
return putObjectPartFn(ctx, bucket, object, uploadID, partID, data, opts)
}
if !c.commitWritethrough && !c.commitWriteback {
return putObjectPartFn(ctx, bucket, object, uploadID, partID, data, opts)
}
if c.skipCache() {
return putObjectPartFn(ctx, bucket, object, uploadID, partID, data, opts)
}
size := data.Size()
// avoid caching part if space unavailable
if !dcache.diskSpaceAvailable(size) {
return putObjectPartFn(ctx, bucket, object, uploadID, partID, data, opts)
}
if opts.ServerSideEncryption != nil {
dcache.Delete(ctx, bucket, object)
return putObjectPartFn(ctx, bucket, object, uploadID, partID, data, opts)
}
// skip cache for objects with locks
objRetention := objectlock.GetObjectRetentionMeta(opts.UserDefined)
legalHold := objectlock.GetObjectLegalHoldMeta(opts.UserDefined)
if objRetention.Mode.Valid() || legalHold.Status.Valid() {
dcache.Delete(ctx, bucket, object)
return putObjectPartFn(ctx, bucket, object, uploadID, partID, data, opts)
}
// fetch from backend if cache exclude pattern or cache-control
// directive set to exclude
if c.isCacheExclude(bucket, object) {
dcache.Delete(ctx, bucket, object)
return putObjectPartFn(ctx, bucket, object, uploadID, partID, data, opts)
}
info = PartInfo{}
// Initialize pipe to stream data to backend
pipeReader, pipeWriter := io.Pipe()
hashReader, err := hash.NewReader(pipeReader, size, "", "", data.ActualSize())
if err != nil {
return
}
// Initialize pipe to stream data to cache
rPipe, wPipe := io.Pipe()
pinfoCh := make(chan PartInfo)
cinfoCh := make(chan PartInfo)
errorCh := make(chan error)
go func() {
info, err = putObjectPartFn(ctx, bucket, object, uploadID, partID, NewPutObjReader(hashReader), opts)
if err != nil {
close(pinfoCh)
pipeReader.CloseWithError(err)
rPipe.CloseWithError(err)
errorCh <- err
return
}
close(errorCh)
pinfoCh <- info
}()
go func() {
pinfo, perr := dcache.PutObjectPart(GlobalContext, bucket, object, uploadID, partID, rPipe, data.Size(), opts)
if perr != nil {
rPipe.CloseWithError(perr)
close(cinfoCh)
// clean up upload
dcache.AbortUpload(bucket, object, uploadID)
return
}
cinfoCh <- pinfo
}()
mwriter := cacheMultiWriter(pipeWriter, wPipe)
_, err = io.Copy(mwriter, data)
pipeWriter.Close()
wPipe.Close()
if err != nil {
err = <-errorCh
return PartInfo{}, err
}
info = <-pinfoCh
cachedInfo := <-cinfoCh
if info.PartNumber == cachedInfo.PartNumber {
cachedInfo.ETag = info.ETag
cachedInfo.LastModified = info.LastModified
dcache.SavePartMetadata(GlobalContext, bucket, object, uploadID, partID, cachedInfo)
}
return info, err
}
// CopyObjectPart behaves similarly to PutObjectPart - caches the part to the upload dir if writethrough mode is enabled.
func (c *cacheObjects) CopyObjectPart(ctx context.Context, srcBucket, srcObject, dstBucket, dstObject, uploadID string, partID int, startOffset int64, length int64, srcInfo ObjectInfo, srcOpts, dstOpts ObjectOptions) (pi PartInfo, e error) {
copyObjectPartFn := c.InnerCopyObjectPartFn
dcache, err := c.getCacheToLoc(ctx, dstBucket, dstObject)
if err != nil {
// disk cache could not be located, execute backend call.
return copyObjectPartFn(ctx, srcBucket, srcObject, dstBucket, dstObject, uploadID, partID, startOffset, length, srcInfo, srcOpts, dstOpts)
}
if !c.commitWritethrough && !c.commitWriteback {
return copyObjectPartFn(ctx, srcBucket, srcObject, dstBucket, dstObject, uploadID, partID, startOffset, length, srcInfo, srcOpts, dstOpts)
}
if err := dcache.uploadIDExists(dstBucket, dstObject, uploadID); err != nil {
return copyObjectPartFn(ctx, srcBucket, srcObject, dstBucket, dstObject, uploadID, partID, startOffset, length, srcInfo, srcOpts, dstOpts)
}
partInfo, err := copyObjectPartFn(ctx, srcBucket, srcObject, dstBucket, dstObject, uploadID, partID, startOffset, length, srcInfo, srcOpts, dstOpts)
if err != nil {
return pi, toObjectErr(err, dstBucket, dstObject)
}
go func() {
isSuffixLength := false
if startOffset < 0 {
isSuffixLength = true
}
rs := &HTTPRangeSpec{
IsSuffixLength: isSuffixLength,
Start: startOffset,
End: startOffset + length,
}
// fill cache in the background
bReader, bErr := c.InnerGetObjectNInfoFn(GlobalContext, srcBucket, srcObject, rs, http.Header{}, readLock, ObjectOptions{})
if bErr != nil {
return
}
defer bReader.Close()
// avoid cache overwrite if another background routine filled cache
dcache.PutObjectPart(GlobalContext, dstBucket, dstObject, uploadID, partID, bReader, length, ObjectOptions{UserDefined: getMetadata(bReader.ObjInfo)})
}()
// Success.
return partInfo, nil
}
// CompleteMultipartUpload - completes multipart upload operation on the backend. If writethrough mode is enabled, this also
// finalizes the upload saved in cache multipart dir.
func (c *cacheObjects) CompleteMultipartUpload(ctx context.Context, bucket, object, uploadID string, uploadedParts []CompletePart, opts ObjectOptions) (oi ObjectInfo, err error) {
completeMultipartUploadFn := c.InnerCompleteMultipartUploadFn
if !c.commitWritethrough && !c.commitWriteback {
return completeMultipartUploadFn(ctx, bucket, object, uploadID, uploadedParts, opts)
}
dcache, err := c.getCacheToLoc(ctx, bucket, object)
if err != nil {
// disk cache could not be located, execute backend call.
return completeMultipartUploadFn(ctx, bucket, object, uploadID, uploadedParts, opts)
}
// complete the multipart upload on the backend; the cached upload is finalized in the background below
oi, err = completeMultipartUploadFn(ctx, bucket, object, uploadID, uploadedParts, opts)
if err == nil {
// fill cache in the background
go func() {
_, err := dcache.CompleteMultipartUpload(ctx, bucket, object, uploadID, uploadedParts, oi, opts)
if err != nil {
// fill cache in the background
bReader, bErr := c.InnerGetObjectNInfoFn(GlobalContext, bucket, object, nil, http.Header{}, readLock, ObjectOptions{})
if bErr != nil {
return
}
defer bReader.Close()
oi, _, err := dcache.Stat(GlobalContext, bucket, object)
// avoid cache overwrite if another background routine filled cache
if err != nil || oi.ETag != bReader.ObjInfo.ETag {
dcache.Put(GlobalContext, bucket, object, bReader, bReader.ObjInfo.Size, nil, ObjectOptions{UserDefined: getMetadata(bReader.ObjInfo)}, false, false)
}
}
}()
}
return
}
// AbortMultipartUpload - aborts multipart upload on backend and cache.
func (c *cacheObjects) AbortMultipartUpload(ctx context.Context, bucket, object, uploadID string, opts ObjectOptions) error {
abortMultipartUploadFn := c.InnerAbortMultipartUploadFn
if !c.commitWritethrough && !c.commitWriteback {
return abortMultipartUploadFn(ctx, bucket, object, uploadID, opts)
}
dcache, err := c.getCacheToLoc(ctx, bucket, object)
if err != nil {
// disk cache could not be located, execute backend call.
return abortMultipartUploadFn(ctx, bucket, object, uploadID, opts)
}
if err = dcache.uploadIDExists(bucket, object, uploadID); err != nil {
return toObjectErr(err, bucket, object, uploadID)
}
// execute backend operation
err = abortMultipartUploadFn(ctx, bucket, object, uploadID, opts)
if err != nil {
return err
}
// abort multipart upload on cache
go dcache.AbortUpload(bucket, object, uploadID)
return nil
}

View file

@ -164,7 +164,6 @@ func (api objectAPIHandlers) GetBucketLoggingHandler(w http.ResponseWriter, r *h
// DeleteBucketWebsiteHandler - DELETE bucket website, a dummy api
func (api objectAPIHandlers) DeleteBucketWebsiteHandler(w http.ResponseWriter, r *http.Request) {
writeSuccessResponseHeadersOnly(w)
w.(http.Flusher).Flush()
}
// GetBucketCorsHandler - GET bucket cors, a dummy api

View file

@ -18,8 +18,6 @@
package cmd
import (
"context"
"errors"
"fmt"
"net"
"net/http"
@ -36,7 +34,6 @@ import (
"github.com/dustin/go-humanize"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio/internal/config"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/logger"
"github.com/minio/minio/internal/mountinfo"
"github.com/minio/pkg/env"
@ -743,72 +740,6 @@ func GetProxyEndpointLocalIndex(proxyEps []ProxyEndpoint) int {
return -1
}
func httpDo(clnt *http.Client, req *http.Request, f func(*http.Response, error) error) error {
ctx, cancel := context.WithTimeout(GlobalContext, 200*time.Millisecond)
defer cancel()
// Run the HTTP request in a goroutine and pass the response to f.
c := make(chan error, 1)
req = req.WithContext(ctx)
go func() { c <- f(clnt.Do(req)) }()
select {
case <-ctx.Done():
<-c // Wait for f to return.
return ctx.Err()
case err := <-c:
return err
}
}
func getOnlineProxyEndpointIdx() int {
type reqIndex struct {
Request *http.Request
Idx int
}
proxyRequests := make(map[*http.Client]reqIndex, len(globalProxyEndpoints))
for i, proxyEp := range globalProxyEndpoints {
proxyEp := proxyEp
serverURL := &url.URL{
Scheme: proxyEp.Scheme,
Host: proxyEp.Host,
Path: pathJoin(healthCheckPathPrefix, healthCheckLivenessPath),
}
req, err := http.NewRequest(http.MethodGet, serverURL.String(), nil)
if err != nil {
continue
}
proxyRequests[&http.Client{
Transport: proxyEp.Transport,
}] = reqIndex{
Request: req,
Idx: i,
}
}
for c, r := range proxyRequests {
if err := httpDo(c, r.Request, func(resp *http.Response, err error) error {
if err != nil {
return err
}
xhttp.DrainBody(resp.Body)
if resp.StatusCode != http.StatusOK {
return errors.New(resp.Status)
}
if v := resp.Header.Get(xhttp.MinIOServerStatus); v == unavailable {
return errors.New(v)
}
return nil
}); err != nil {
continue
}
return r.Idx
}
return -1
}
// GetProxyEndpoints - get all endpoints that can be used to proxy list request.
func GetProxyEndpoints(endpointServerPools EndpointServerPools) []ProxyEndpoint {
var proxyEps []ProxyEndpoint

View file

@ -172,19 +172,25 @@ func (er erasureObjects) DeleteBucket(ctx context.Context, bucket string, opts D
if err == errErasureWriteQuorum && !opts.NoRecreate {
undoDeleteBucket(storageDisks, bucket)
}
if err != nil {
return toObjectErr(err, bucket)
}
// At this point we have `err == nil` but some errors might be `errVolumeNotEmpty`
// we should proceed to attempt a force delete of such buckets.
for index, err := range dErrs {
if err == errVolumeNotEmpty && storageDisks[index] != nil {
storageDisks[index].RenameFile(ctx, bucket, "", minioMetaTmpDeletedBucket, mustGetUUID())
if err == nil || errors.Is(err, errVolumeNotFound) {
var purgedDangling bool
// At this point we have `err == nil` but some errors might be `errVolumeNotEmpty`
// we should proceed to attempt a force delete of such buckets.
for index, err := range dErrs {
if err == errVolumeNotEmpty && storageDisks[index] != nil {
storageDisks[index].RenameFile(ctx, bucket, "", minioMetaTmpDeletedBucket, mustGetUUID())
purgedDangling = true
}
}
// if we purged dangling buckets, ignore errVolumeNotFound error.
if purgedDangling {
err = nil
}
}
return nil
return toObjectErr(err, bucket)
}
// IsNotificationSupported returns whether bucket notification is applicable for this layer.

View file

@ -1088,37 +1088,64 @@ func (er erasureObjects) DeleteObjects(ctx context.Context, bucket string, objec
writeQuorums[i] = getWriteQuorum(len(storageDisks))
}
versions := make([]FileInfo, len(objects))
versionsMap := make(map[string]FileInfoVersions, len(objects))
for i := range objects {
if objects[i].VersionID == "" {
modTime := opts.MTime
if opts.MTime.IsZero() {
modTime = UTCNow()
}
uuid := opts.VersionID
if uuid == "" {
uuid = mustGetUUID()
}
if opts.Versioned || opts.VersionSuspended {
versions[i] = FileInfo{
Name: objects[i].ObjectName,
ModTime: modTime,
Deleted: true, // delete marker
ReplicationState: objects[i].ReplicationState(),
}
versions[i].SetTierFreeVersionID(mustGetUUID())
if opts.Versioned {
versions[i].VersionID = uuid
}
continue
}
}
versions[i] = FileInfo{
// Construct the FileInfo data that needs to be preserved on the disk.
vr := FileInfo{
Name: objects[i].ObjectName,
VersionID: objects[i].VersionID,
ReplicationState: objects[i].ReplicationState(),
// save the index to set correct error at this index.
Idx: i,
}
versions[i].SetTierFreeVersionID(mustGetUUID())
vr.SetTierFreeVersionID(mustGetUUID())
// If VersionID is not set, the delete is not specific to
// any version; check whether the bucket is versioned or not.
if objects[i].VersionID == "" {
if opts.Versioned || opts.VersionSuspended {
// Bucket is versioned and no version was explicitly
// mentioned for deletes, create a delete marker instead.
vr.ModTime = UTCNow()
vr.Deleted = true
// Versioning suspended means that we add a `null` version
// delete marker; otherwise add a new version for this delete
// marker.
if opts.Versioned {
vr.VersionID = mustGetUUID()
}
}
}
// De-dup same object name to collect multiple versions for same object.
v, ok := versionsMap[objects[i].ObjectName]
if ok {
v.Versions = append(v.Versions, vr)
} else {
v = FileInfoVersions{
Name: vr.Name,
Versions: []FileInfo{vr},
}
}
if vr.Deleted {
dobjects[i] = DeletedObject{
DeleteMarker: vr.Deleted,
DeleteMarkerVersionID: vr.VersionID,
DeleteMarkerMTime: DeleteMarkerMTime{vr.ModTime},
ObjectName: vr.Name,
ReplicationState: vr.ReplicationState,
}
} else {
dobjects[i] = DeletedObject{
ObjectName: vr.Name,
VersionID: vr.VersionID,
ReplicationState: vr.ReplicationState,
}
}
versionsMap[objects[i].ObjectName] = v
}
dedupVersions := make([]FileInfoVersions, 0, len(versionsMap))
for _, version := range versionsMap {
dedupVersions = append(dedupVersions, version)
}
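// Worked sketch (names invented): deleting a#v1, a#v2 and b#v1 in one call
// yields dedupVersions = [{Name: "a", Versions: [v1, v2]}, {Name: "b",
// Versions: [v1]}]; each FileInfo keeps its original Idx so per-version
// errors from DeleteVersions map back to delObjErrs[index][Idx].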
// Initialize list of errors.
@ -1130,17 +1157,24 @@ func (er erasureObjects) DeleteObjects(ctx context.Context, bucket string, objec
wg.Add(1)
go func(index int, disk StorageAPI) {
defer wg.Done()
delObjErrs[index] = make([]error, len(objects))
if disk == nil {
delObjErrs[index] = make([]error, len(versions))
for i := range versions {
for i := range objects {
delObjErrs[index][i] = errDiskNotFound
}
return
}
delObjErrs[index] = disk.DeleteVersions(ctx, bucket, versions)
errs := disk.DeleteVersions(ctx, bucket, dedupVersions)
for i, err := range errs {
if err == nil {
continue
}
for _, v := range dedupVersions[i].Versions {
delObjErrs[index][v.Idx] = err
}
}
}(index, disk)
}
wg.Wait()
// Reduce errors for each object
@ -1162,28 +1196,17 @@ func (er erasureObjects) DeleteObjects(ctx context.Context, bucket string, objec
}
if errs[objIndex] == nil {
NSUpdated(bucket, objects[objIndex].ObjectName)
}
if versions[objIndex].Deleted {
dobjects[objIndex] = DeletedObject{
DeleteMarker: versions[objIndex].Deleted,
DeleteMarkerVersionID: versions[objIndex].VersionID,
DeleteMarkerMTime: DeleteMarkerMTime{versions[objIndex].ModTime},
ObjectName: versions[objIndex].Name,
ReplicationState: versions[objIndex].ReplicationState,
}
} else {
dobjects[objIndex] = DeletedObject{
ObjectName: versions[objIndex].Name,
VersionID: versions[objIndex].VersionID,
ReplicationState: versions[objIndex].ReplicationState,
}
defer NSUpdated(bucket, objects[objIndex].ObjectName)
}
}
// Check failed deletes across multiple objects
for _, version := range versions {
for i, dobj := range dobjects {
// This object errored, no need to attempt a heal.
if errs[i] != nil {
continue
}
// Check if there is any offline disk and add it to the MRF list
for _, disk := range storageDisks {
if disk != nil && disk.IsOnline() {
@ -1193,7 +1216,7 @@ func (er erasureObjects) DeleteObjects(ctx context.Context, bucket string, objec
// all other direct versionId references we should
// ensure no dangling file is left over.
er.addPartial(bucket, version.Name, version.VersionID, -1)
er.addPartial(bucket, dobj.ObjectName, dobj.VersionID, -1)
break
}
}
@ -1440,13 +1463,21 @@ func (er erasureObjects) PutObjectMetadata(ctx context.Context, bucket, object s
return ObjectInfo{}, toObjectErr(err, bucket, object)
}
if fi.Deleted {
if opts.VersionID == "" {
return ObjectInfo{}, toObjectErr(errFileNotFound, bucket, object)
}
return ObjectInfo{}, toObjectErr(errMethodNotAllowed, bucket, object)
}
for k, v := range opts.UserDefined {
// If version-id is not specified, retention is supposed to be set on the latest object.
if opts.VersionID == "" {
opts.VersionID = fi.VersionID
}
objInfo := fi.ToObjectInfo(bucket, object)
if opts.EvalMetadataFn != nil {
if err := opts.EvalMetadataFn(objInfo); err != nil {
return ObjectInfo{}, err
}
}
for k, v := range objInfo.UserDefined {
fi.Metadata[k] = v
}
fi.ModTime = opts.MTime
@ -1456,9 +1487,7 @@ func (er erasureObjects) PutObjectMetadata(ctx context.Context, bucket, object s
return ObjectInfo{}, toObjectErr(err, bucket, object)
}
objInfo := fi.ToObjectInfo(bucket, object)
return objInfo, nil
return fi.ToObjectInfo(bucket, object), nil
}
// PutObjectTags - replace or add tags to an existing object

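PutObjectMetadata above now resolves an empty VersionID to the latest version and gives callers a hook, ObjectOptions.EvalMetadataFn, that runs before the metadata write; mutations it makes to the object's user-defined metadata are copied into what gets persisted. A self-contained sketch of the calling convention, with simplified stand-in types and an assumed metadata key used purely for illustration:

package main

import "fmt"

// Simplified stand-ins mirroring only the fields the hook touches.
type objectInfo struct{ UserDefined map[string]string }

type evalMetadataFn func(objectInfo) error

func main() {
	oi := objectInfo{UserDefined: map[string]string{}}
	// Hypothetical hook: inspect the object, then adjust metadata in
	// place; the map is shared, so the caller sees the mutation.
	var hook evalMetadataFn = func(o objectInfo) error {
		o.UserDefined["X-Amz-Object-Lock-Mode"] = "GOVERNANCE" // assumed key, illustration only
		return nil
	}
	if err := hook(oi); err != nil {
		panic(err)
	}
	fmt.Println(oi.UserDefined)
}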

@ -202,13 +202,6 @@ func TestErasureDeleteObjectsErasureSet(t *testing.T) {
}
func TestErasureDeleteObjectDiskNotFound(t *testing.T) {
restoreGlobalStorageClass := globalStorageClass
defer func() {
globalStorageClass = restoreGlobalStorageClass
}()
globalStorageClass = storageclass.Config{}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@ -278,13 +271,6 @@ func TestErasureDeleteObjectDiskNotFound(t *testing.T) {
}
func TestErasureDeleteObjectDiskNotFoundErasure4(t *testing.T) {
restoreGlobalStorageClass := globalStorageClass
defer func() {
globalStorageClass = restoreGlobalStorageClass
}()
globalStorageClass = storageclass.Config{}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@ -345,13 +331,6 @@ func TestErasureDeleteObjectDiskNotFoundErasure4(t *testing.T) {
}
func TestErasureDeleteObjectDiskNotFoundErr(t *testing.T) {
restoreGlobalStorageClass := globalStorageClass
defer func() {
globalStorageClass = restoreGlobalStorageClass
}()
globalStorageClass = storageclass.Config{}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@ -807,13 +786,6 @@ func TestObjectQuorumFromMeta(t *testing.T) {
}
func testObjectQuorumFromMeta(obj ObjectLayer, instanceType string, dirs []string, t TestErrHandler) {
restoreGlobalStorageClass := globalStorageClass
defer func() {
globalStorageClass = restoreGlobalStorageClass
}()
globalStorageClass = storageclass.Config{}
bucket := getRandomBucketName()
var opts ObjectOptions


@ -516,7 +516,7 @@ func (a *auditObjectErasureMap) MarshalJSON() ([]byte, error) {
}
func auditObjectErasureSet(ctx context.Context, object string, set *erasureObjects) {
if len(logger.AuditTargets) == 0 {
if len(logger.AuditTargets()) == 0 {
return
}
@ -1213,12 +1213,7 @@ func markRootDisksAsDown(storageDisks []StorageAPI, errs []error) {
// HealFormat - heals missing `format.json` on fresh unformatted disks.
func (s *erasureSets) HealFormat(ctx context.Context, dryRun bool) (res madmin.HealResultItem, err error) {
storageDisks, errs := initStorageDisksWithErrorsWithoutHealthCheck(s.endpoints)
for i, derr := range errs {
if derr != nil && derr != errDiskNotFound {
return madmin.HealResultItem{}, fmt.Errorf("Disk %s: %w", s.endpoints[i], derr)
}
}
storageDisks, _ := initStorageDisksWithErrorsWithoutHealthCheck(s.endpoints)
defer func(storageDisks []StorageAPI) {
if err != nil {
@ -1279,8 +1274,14 @@ func (s *erasureSets) HealFormat(ctx context.Context, dryRun bool) (res madmin.H
}
// Save new formats `format.json` on unformatted disks.
if err = saveUnformattedFormat(ctx, storageDisks, tmpNewFormats); err != nil {
return madmin.HealResultItem{}, err
for index, format := range tmpNewFormats {
if storageDisks[index] == nil || format == nil {
continue
}
if err := saveFormatErasure(storageDisks[index], format, true); err != nil {
logger.LogIf(ctx, fmt.Errorf("Disk %s failed to write updated 'format.json': %v", storageDisks[index], err))
tmpNewFormats[index] = nil // this disk failed to write new format
}
}
s.erasureDisksMu.Lock()


@ -24,6 +24,7 @@ import (
"io"
"github.com/klauspost/reedsolomon"
xioutil "github.com/minio/minio/internal/ioutil"
"github.com/minio/minio/internal/logger"
)
@ -82,7 +83,7 @@ func writeDataBlocks(ctx context.Context, dst io.Writer, enBlocks [][]byte, data
// We have written all the blocks, write the last remaining block.
if write < int64(len(block)) {
n, err := io.Copy(dst, bytes.NewReader(block[:write]))
n, err := xioutil.Copy(dst, bytes.NewReader(block[:write]))
if err != nil {
// The writer will be closed in case of range queries, which will emit ErrClosedPipe.
// The reader pipe might be closed at ListObjects io.EOF; ignore it.
@ -96,7 +97,7 @@ func writeDataBlocks(ctx context.Context, dst io.Writer, enBlocks [][]byte, data
}
// Copy the block.
n, err := io.Copy(dst, bytes.NewReader(block))
n, err := xioutil.Copy(dst, bytes.NewReader(block))
if err != nil {
// The writer will be closed in case of range queries, which will emit ErrClosedPipe.
// The reader pipe might be closed at ListObjects io.EOF; ignore it.
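io.Copy allocates a fresh 32 KiB buffer on every call unless the reader or writer can short-circuit it, which adds up on these hot write paths. The diff does not show xioutil.Copy itself; a plausible sketch of the pooled-buffer approach such a helper typically takes:

package main

import (
	"bytes"
	"fmt"
	"io"
	"sync"
)

// copyBufPool hands out reusable 32 KiB buffers, the size io.Copy
// would otherwise allocate per call.
var copyBufPool = sync.Pool{
	New: func() interface{} {
		b := make([]byte, 32*1024)
		return &b
	},
}

// copyBuffered behaves like io.Copy but reuses a pooled buffer.
func copyBuffered(dst io.Writer, src io.Reader) (int64, error) {
	bufp := copyBufPool.Get().(*[]byte)
	defer copyBufPool.Put(bufp)
	return io.CopyBuffer(dst, src, *bufp)
}

func main() {
	var dst bytes.Buffer
	n, err := copyBuffered(&dst, bytes.NewReader([]byte("hello")))
	fmt.Println(n, err, dst.String())
}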


@ -698,22 +698,6 @@ func initErasureMetaVolumesInLocalDisks(storageDisks []StorageAPI, formats []*fo
return nil
}
// saveUnformattedFormat - populates `format.json` on unformatted disks.
// also adds `.healing.bin` on the disks which are being actively healed.
func saveUnformattedFormat(ctx context.Context, storageDisks []StorageAPI, formats []*formatErasureV3) error {
for index, format := range formats {
if format == nil {
continue
}
if storageDisks[index] != nil {
if err := saveFormatErasure(storageDisks[index], format, true); err != nil {
return err
}
}
}
return nil
}
// saveFormatErasureAll - populates `format.json` on disks in its order.
func saveFormatErasureAll(ctx context.Context, storageDisks []StorageAPI, formats []*formatErasureV3) error {
g := errgroup.WithNErrs(len(storageDisks))


@ -26,6 +26,7 @@ import (
"strings"
"time"
xioutil "github.com/minio/minio/internal/ioutil"
"github.com/minio/minio/internal/lock"
"github.com/minio/minio/internal/logger"
)
@ -334,7 +335,7 @@ func fsCreateFile(ctx context.Context, filePath string, reader io.Reader, falloc
}
defer writer.Close()
bytesWritten, err := io.Copy(writer, reader)
bytesWritten, err := xioutil.Copy(writer, reader)
if err != nil {
logger.LogIf(ctx, err)
return 0, err


@ -56,9 +56,6 @@ var defaultEtag = "00000000000000000000000000000000-1"
type FSObjects struct {
GatewayUnsupported
// The count of concurrent calls on FSObjects API
activeIOCount int64
// Path to be exported over S3 API.
fsPath string
// meta json filename, varies by fs / cache backend.
@ -215,11 +212,6 @@ func (fs *FSObjects) LocalStorageInfo(ctx context.Context) (StorageInfo, []error
// StorageInfo - returns underlying storage statistics.
func (fs *FSObjects) StorageInfo(ctx context.Context) (StorageInfo, []error) {
atomic.AddInt64(&fs.activeIOCount, 1)
defer func() {
atomic.AddInt64(&fs.activeIOCount, -1)
}()
di, err := getDiskInfo(fs.fsPath)
if err != nil {
return StorageInfo{}, []error{err}
@ -444,11 +436,6 @@ func (fs *FSObjects) MakeBucketWithLocation(ctx context.Context, bucket string,
defer NSUpdated(bucket, slashSeparator)
atomic.AddInt64(&fs.activeIOCount, 1)
defer func() {
atomic.AddInt64(&fs.activeIOCount, -1)
}()
bucketDir, err := fs.getBucketDir(ctx, bucket)
if err != nil {
return toObjectErr(err, bucket)
@ -509,11 +496,6 @@ func (fs *FSObjects) DeleteBucketPolicy(ctx context.Context, bucket string) erro
// GetBucketInfo - fetch bucket metadata info.
func (fs *FSObjects) GetBucketInfo(ctx context.Context, bucket string) (bi BucketInfo, e error) {
atomic.AddInt64(&fs.activeIOCount, 1)
defer func() {
atomic.AddInt64(&fs.activeIOCount, -1)
}()
st, err := fs.statBucketDir(ctx, bucket)
if err != nil {
return bi, toObjectErr(err, bucket)
@ -538,11 +520,6 @@ func (fs *FSObjects) ListBuckets(ctx context.Context) ([]BucketInfo, error) {
return nil, err
}
atomic.AddInt64(&fs.activeIOCount, 1)
defer func() {
atomic.AddInt64(&fs.activeIOCount, -1)
}()
entries, err := readDirWithOpts(fs.fsPath, readDirOpts{count: -1, followDirSymlink: true})
if err != nil {
logger.LogIf(ctx, errDiskNotFound)
@ -591,11 +568,6 @@ func (fs *FSObjects) ListBuckets(ctx context.Context) ([]BucketInfo, error) {
func (fs *FSObjects) DeleteBucket(ctx context.Context, bucket string, opts DeleteBucketOptions) error {
defer NSUpdated(bucket, slashSeparator)
atomic.AddInt64(&fs.activeIOCount, 1)
defer func() {
atomic.AddInt64(&fs.activeIOCount, -1)
}()
bucketDir, err := fs.getBucketDir(ctx, bucket)
if err != nil {
return toObjectErr(err, bucket)
@ -656,11 +628,6 @@ func (fs *FSObjects) CopyObject(ctx context.Context, srcBucket, srcObject, dstBu
defer objectDWLock.Unlock(lkctx.Cancel)
}
atomic.AddInt64(&fs.activeIOCount, 1)
defer func() {
atomic.AddInt64(&fs.activeIOCount, -1)
}()
if _, err := fs.statBucketDir(ctx, srcBucket); err != nil {
return oi, toObjectErr(err, srcBucket)
}
@ -734,11 +701,6 @@ func (fs *FSObjects) GetObjectNInfo(ctx context.Context, bucket, object string,
return nil, err
}
atomic.AddInt64(&fs.activeIOCount, 1)
defer func() {
atomic.AddInt64(&fs.activeIOCount, -1)
}()
if _, err = fs.statBucketDir(ctx, bucket); err != nil {
return nil, toObjectErr(err, bucket)
}
@ -998,11 +960,6 @@ func (fs *FSObjects) GetObjectInfo(ctx context.Context, bucket, object string, o
}
}
atomic.AddInt64(&fs.activeIOCount, 1)
defer func() {
atomic.AddInt64(&fs.activeIOCount, -1)
}()
oi, err := fs.getObjectInfoWithLock(ctx, bucket, object)
if err == errCorruptedFormat || err == io.EOF {
lk := fs.NewNSLock(bucket, object)
@ -1049,11 +1006,6 @@ func (fs *FSObjects) PutObject(ctx context.Context, bucket string, object string
ctx = lkctx.Context()
defer lk.Unlock(lkctx.Cancel)
atomic.AddInt64(&fs.activeIOCount, 1)
defer func() {
atomic.AddInt64(&fs.activeIOCount, -1)
}()
return fs.putObject(ctx, bucket, object, r, opts)
}
@ -1221,11 +1173,6 @@ func (fs *FSObjects) DeleteObject(ctx context.Context, bucket, object string, op
return objInfo, err
}
atomic.AddInt64(&fs.activeIOCount, 1)
defer func() {
atomic.AddInt64(&fs.activeIOCount, -1)
}()
if _, err = fs.statBucketDir(ctx, bucket); err != nil {
return objInfo, toObjectErr(err, bucket)
}
@ -1315,11 +1262,6 @@ func (fs *FSObjects) ListObjectVersions(ctx context.Context, bucket, prefix, mar
// ListObjects - list all objects at prefix up to maxKeys, optionally delimited by '/'. Maintains the list pool
// state for future re-entrant list requests.
func (fs *FSObjects) ListObjects(ctx context.Context, bucket, prefix, marker, delimiter string, maxKeys int) (loi ListObjectsInfo, e error) {
atomic.AddInt64(&fs.activeIOCount, 1)
defer func() {
atomic.AddInt64(&fs.activeIOCount, -1)
}()
return listObjects(ctx, fs, bucket, prefix, marker, delimiter, maxKeys, fs.listPool,
fs.listDirFactory(), fs.isLeaf, fs.isLeafDir, fs.getObjectInfoNoFSLock, fs.getObjectInfoNoFSLock)
}


@ -287,7 +287,7 @@ func ErrorRespToObjectError(err error, params ...string) error {
}
if xnet.IsNetworkOrHostDown(err, false) {
return BackendDown{}
return BackendDown{Err: err.Error()}
}
minioErr, ok := err.(minio.ErrorResponse)


@ -269,7 +269,7 @@ func StartGateway(ctx *cli.Context, gw Gateway) {
addrs = append(addrs, globalMinioAddr)
}
httpServer := xhttp.NewServer(addrs, criticalErrorHandler{corsHandler(router)}, getCert)
httpServer := xhttp.NewServer(addrs, setCriticalErrorHandler(corsHandler(router)), getCert)
httpServer.BaseContext = func(listener net.Listener) context.Context {
return GlobalContext
}
@ -306,7 +306,7 @@ func StartGateway(ctx *cli.Context, gw Gateway) {
logger.FatalIf(globalNotificationSys.Init(GlobalContext, buckets, newObject), "Unable to initialize notification system")
}
go globalIAMSys.Init(GlobalContext, newObject, globalEtcdClient)
go globalIAMSys.Init(GlobalContext, newObject, globalEtcdClient, globalRefreshIAMInterval)
if globalCacheConfig.Enabled {
// initialize the new disk cache objects.

View file

@ -28,6 +28,7 @@ import (
"path"
"sort"
"strings"
"sync"
"syscall"
"time"
@ -407,6 +408,7 @@ func (n *hdfsObjects) listDirFactory() minio.ListDirFunc {
// ListObjects lists all blobs in HDFS bucket filtered by prefix.
func (n *hdfsObjects) ListObjects(ctx context.Context, bucket, prefix, marker, delimiter string, maxKeys int) (loi minio.ListObjectsInfo, err error) {
var mutex sync.Mutex
fileInfos := make(map[string]os.FileInfo)
targetPath := n.hdfsPathJoin(bucket, prefix)
@ -430,6 +432,9 @@ func (n *hdfsObjects) ListObjects(ctx context.Context, bucket, prefix, marker, d
}
getObjectInfo := func(ctx context.Context, bucket, entry string) (minio.ObjectInfo, error) {
mutex.Lock()
defer mutex.Unlock()
filePath := path.Clean(n.hdfsPathJoin(bucket, entry))
fi, ok := fileInfos[filePath]
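fileInfos is written by the listing callback and read by getObjectInfo, which can run concurrently; unsynchronized map writes abort the whole process with a "fatal error: concurrent map writes", which is the crash this mutex fixes. A self-contained illustration of the hazard and the fix:

package main

import (
	"os"
	"sync"
)

func main() {
	fileInfos := make(map[string]os.FileInfo)
	var mu sync.Mutex
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				// Without the lock, this eventually crashes with
				// "fatal error: concurrent map writes".
				mu.Lock()
				fileInfos["entry"] = nil
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
}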


@ -18,10 +18,10 @@
package cmd
import (
"context"
"net"
"net/http"
"path"
"runtime/debug"
"strings"
"sync/atomic"
"time"
@ -29,7 +29,7 @@ import (
"github.com/minio/minio-go/v7/pkg/set"
xnet "github.com/minio/pkg/net"
humanize "github.com/dustin/go-humanize"
"github.com/dustin/go-humanize"
"github.com/minio/minio/internal/config/dns"
"github.com/minio/minio/internal/crypto"
xhttp "github.com/minio/minio/internal/http"
@ -37,42 +37,39 @@ import (
"github.com/minio/minio/internal/logger"
)
// Adds limiting body size middleware
// Maximum allowed form data field values. 64MiB is a guessed practical value
// which is more than enough to accommodate any form data fields and headers.
const requestFormDataSize = 64 * humanize.MiByte
// For any HTTP request, request body should be not more than 16GiB + requestFormDataSize
// where, 16GiB is the maximum allowed object size for object upload.
const requestMaxBodySize = globalMaxObjectSize + requestFormDataSize
func setRequestSizeLimitHandler(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Restricting read data to a given maximum length
r.Body = http.MaxBytesReader(w, r.Body, requestMaxBodySize)
h.ServeHTTP(w, r)
})
}
const (
// Maximum allowed form data field values. 64MiB is a guessed practical value
// which is more than enough to accommodate any form data fields and headers.
requestFormDataSize = 64 * humanize.MiByte
// For any HTTP request, request body should be not more than 16GiB + requestFormDataSize
// where, 16GiB is the maximum allowed object size for object upload.
requestMaxBodySize = globalMaxObjectSize + requestFormDataSize
// Maximum size for http headers - See: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html
maxHeaderSize = 8 * 1024
// Maximum size for user-defined metadata - See: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html
maxUserDataSize = 2 * 1024
)
// ServeHTTP restricts the size of the http header to 8 KB and the size
// of the user-defined metadata to 2 KB.
func setRequestHeaderSizeLimitHandler(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if isHTTPHeaderSizeTooLarge(r.Header) {
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrMetadataTooLarge), r.URL)
atomic.AddUint64(&globalHTTPStats.rejectedRequestsHeader, 1)
return
// ReservedMetadataPrefix is the prefix of a metadata key which
// is reserved and for internal use only.
const (
ReservedMetadataPrefix = "X-Minio-Internal-"
ReservedMetadataPrefixLower = "x-minio-internal-"
)
// containsReservedMetadata returns true if the http.Header contains
// keys which are treated as metadata but are reserved for internal use
// and must not be set by clients
func containsReservedMetadata(header http.Header) bool {
for key := range header {
if strings.HasPrefix(strings.ToLower(key), ReservedMetadataPrefixLower) {
return true
}
h.ServeHTTP(w, r)
})
}
return false
}
// isHTTPHeaderSizeTooLarge returns true if the provided
@ -96,37 +93,25 @@ func isHTTPHeaderSizeTooLarge(header http.Header) bool {
return false
}
// ReservedMetadataPrefix is the prefix of a metadata key which
// is reserved and for internal use only.
const (
ReservedMetadataPrefix = "X-Minio-Internal-"
ReservedMetadataPrefixLower = "x-minio-internal-"
)
// ServeHTTP fails if the request contains at least one reserved header which
// would be treated as metadata.
func filterReservedMetadata(h http.Handler) http.Handler {
// Limits body and header to specific allowed maximum limits as per S3/MinIO API requirements.
func setRequestLimitHandler(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Reject unsupported reserved metadata before further validation.
if containsReservedMetadata(r.Header) {
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrUnsupportedMetadata), r.URL)
return
}
if isHTTPHeaderSizeTooLarge(r.Header) {
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrMetadataTooLarge), r.URL)
atomic.AddUint64(&globalHTTPStats.rejectedRequestsHeader, 1)
return
}
// Restricting read data to a given maximum length
r.Body = http.MaxBytesReader(w, r.Body, requestMaxBodySize)
h.ServeHTTP(w, r)
})
}
// containsReservedMetadata returns true if the http.Header contains
// keys which are treated as metadata but are reserved for internal use
// and must not be set by clients
func containsReservedMetadata(header http.Header) bool {
for key := range header {
if strings.HasPrefix(strings.ToLower(key), ReservedMetadataPrefixLower) {
return true
}
}
return false
}
// Reserved bucket.
const (
minioReservedBucket = "minio"
@ -134,24 +119,6 @@ const (
loginPathPrefix = SlashSeparator + "login"
)
func setRedirectHandler(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if !shouldProxy() || guessIsRPCReq(r) || guessIsBrowserReq(r) ||
guessIsHealthCheckReq(r) || guessIsMetricsReq(r) || isAdminReq(r) {
h.ServeHTTP(w, r)
return
}
// if this server is still initializing, proxy the request
// to any other online servers to avoid 503 for any incoming
// API calls.
if idx := getOnlineProxyEndpointIdx(); idx >= 0 {
proxyRequest(context.TODO(), w, r, globalProxyEndpoints[idx])
return
}
h.ServeHTTP(w, r)
})
}
func guessIsBrowserReq(r *http.Request) bool {
aType := getRequestAuthType(r)
return strings.Contains(r.Header.Get("User-Agent"), "Mozilla") &&
@ -174,10 +141,6 @@ func setBrowserRedirectHandler(h http.Handler) http.Handler {
})
}
func shouldProxy() bool {
return newObjectLayerFn() == nil
}
// Fetch redirect location if urlPath satisfies certain
// criteria. Some special names are considered to be
// redirectable, this is purely internal function and
@ -261,31 +224,21 @@ func isAdminReq(r *http.Request) bool {
return strings.HasPrefix(r.URL.Path, adminPathPrefix)
}
// Adds verification for incoming paths.
func setReservedBucketHandler(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// For all other requests reject access to reserved buckets
bucketName, _ := request2BucketObjectName(r)
if isMinioReservedBucket(bucketName) || isMinioMetaBucket(bucketName) {
if !guessIsRPCReq(r) && !guessIsBrowserReq(r) && !guessIsHealthCheckReq(r) && !guessIsMetricsReq(r) && !isAdminReq(r) {
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrAllAccessDisabled), r.URL)
return
}
}
h.ServeHTTP(w, r)
})
}
// Supported Amz date formats.
// Supported amz date formats.
var amzDateFormats = []string{
// Do not change this order, x-amz-date format is usually in
// iso8601Format; the rest are meant for relaxed handling of other
// odd SDKs that might be out there.
iso8601Format,
time.RFC1123,
time.RFC1123Z,
iso8601Format,
// Add new AMZ date formats here.
}
// Supported Amz date headers.
var amzDateHeaders = []string{
// Do not change this order, x-amz-date value should be
// validated first.
"x-amz-date",
"date",
}
@ -314,34 +267,6 @@ func parseAmzDateHeader(req *http.Request) (time.Time, APIErrorCode) {
return time.Time{}, ErrMissingDateHeader
}
// setTimeValidityHandler to validate parsable time over http header
func setTimeValidityHandler(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
aType := getRequestAuthType(r)
if aType == authTypeSigned || aType == authTypeSignedV2 || aType == authTypeStreamingSigned {
// Verify if date headers are set, if not reject the request
amzDate, errCode := parseAmzDateHeader(r)
if errCode != ErrNone {
// All our internal APIs are sensitive towards Date
// header, for all requests where Date header is not
// present we will reject such clients.
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(errCode), r.URL)
atomic.AddUint64(&globalHTTPStats.rejectedRequestsTime, 1)
return
}
// Verify if the request date header is shifted by less than globalMaxSkewTime parameter in the past
// or in the future, reject request otherwise.
curTime := UTCNow()
if curTime.Sub(amzDate) > globalMaxSkewTime || amzDate.Sub(curTime) > globalMaxSkewTime {
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrRequestTimeTooSkewed), r.URL)
atomic.AddUint64(&globalHTTPStats.rejectedRequestsTime, 1)
return
}
}
h.ServeHTTP(w, r)
})
}
// setHttpStatsHandler sets a http Stats handler to gather HTTP statistics
func setHTTPStatsHandler(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@ -354,11 +279,11 @@ func setHTTPStatsHandler(h http.Handler) http.Handler {
h.ServeHTTP(meteredResponse, r)
if strings.HasPrefix(r.URL.Path, minioReservedBucketPath) {
globalConnStats.incInputBytes(meteredRequest.BytesCount())
globalConnStats.incOutputBytes(meteredResponse.BytesCount())
globalConnStats.incInputBytes(meteredRequest.BytesRead())
globalConnStats.incOutputBytes(meteredResponse.BytesWritten())
} else {
globalConnStats.incS3InputBytes(meteredRequest.BytesCount())
globalConnStats.incS3OutputBytes(meteredResponse.BytesCount())
globalConnStats.incS3InputBytes(meteredRequest.BytesRead())
globalConnStats.incS3OutputBytes(meteredResponse.BytesWritten())
}
})
}
@ -420,6 +345,23 @@ func setRequestValidityHandler(h http.Handler) http.Handler {
atomic.AddUint64(&globalHTTPStats.rejectedRequestsInvalid, 1)
return
}
// For all other requests reject access to reserved buckets
bucketName, _ := request2BucketObjectName(r)
if isMinioReservedBucket(bucketName) || isMinioMetaBucket(bucketName) {
if !guessIsRPCReq(r) && !guessIsBrowserReq(r) && !guessIsHealthCheckReq(r) && !guessIsMetricsReq(r) && !isAdminReq(r) {
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrAllAccessDisabled), r.URL)
return
}
}
// Deny SSE-C requests if not made over TLS
if !globalIsTLS && (crypto.SSEC.IsRequested(r.Header) || crypto.SSECopy.IsRequested(r.Header)) {
if r.Method == http.MethodHead {
writeErrorResponseHeadersOnly(w, errorCodes.ToAPIErr(ErrInsecureSSECustomerRequest))
} else {
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrInsecureSSECustomerRequest), r.URL)
}
return
}
h.ServeHTTP(w, r)
})
}
@ -429,8 +371,7 @@ func setRequestValidityHandler(h http.Handler) http.Handler {
// is obtained from centralized etcd configuration service.
func setBucketForwardingHandler(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if globalDNSConfig == nil || len(globalDomainNames) == 0 || !globalBucketFederation ||
guessIsHealthCheckReq(r) || guessIsMetricsReq(r) ||
if guessIsHealthCheckReq(r) || guessIsMetricsReq(r) ||
guessIsRPCReq(r) || guessIsLoginSTSReq(r) || isAdminReq(r) {
h.ServeHTTP(w, r)
return
@ -491,62 +432,46 @@ func setBucketForwardingHandler(h http.Handler) http.Handler {
})
}
// customHeaderHandler sets x-amz-request-id header.
// Previously, this value was set right before a response was sent to
// the client. So, logger and Error response XML were not using this
// value. This is set here so that this header can be logged as
// part of the log entry, Error response XML and auditing.
func addCustomHeaders(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Set custom headers such as x-amz-request-id for each request.
w.Header().Set(xhttp.AmzRequestID, mustGetRequestID(UTCNow()))
h.ServeHTTP(logger.NewResponseWriter(w), r)
})
}
// addSecurityHeaders adds various HTTP(S) response headers.
// addCustomHeaders adds various HTTP(S) response headers.
// Security headers enable various protection behaviors in the client's browser.
func addSecurityHeaders(h http.Handler) http.Handler {
func addCustomHeaders(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
header := w.Header()
header.Set("X-XSS-Protection", "1; mode=block") // Prevents against XSS attacks
header.Set("Content-Security-Policy", "block-all-mixed-content") // prevent mixed (HTTP / HTTPS content)
header.Set("X-Content-Type-Options", "nosniff") // Prevent mime-sniff
header.Set("Strict-Transport-Security", "max-age=31536000; includeSubDomains") // HSTS mitigates variants of MITM attacks
h.ServeHTTP(w, r)
// Previously, this value was set right before a response was sent to
// the client. So, logger and Error response XML were not using this
// value. This is set here so that this header can be logged as
// part of the log entry, Error response XML and auditing.
// Set custom headers such as x-amz-request-id for each request.
w.Header().Set(xhttp.AmzRequestID, mustGetRequestID(UTCNow()))
h.ServeHTTP(logger.NewResponseWriter(w), r)
})
}
// criticalErrorHandler handles critical server failures caused by
// setCriticalErrorHandler handles panics and fatal errors raised via
// `panic(logger.ErrCritical)` as done by `logger.CriticalIf`.
//
// It should always be the first / highest HTTP handler.
type criticalErrorHandler struct{ handler http.Handler }
func (h criticalErrorHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
defer func() {
if err := recover(); err == logger.ErrCritical { // handle
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrInternalError), r.URL)
return
} else if err != nil {
panic(err) // forward other panic calls
}
}()
h.handler.ServeHTTP(w, r)
}
// sseTLSHandler enforces certain rules for SSE requests which are made / must be made over TLS.
func setSSETLSHandler(h http.Handler) http.Handler {
func setCriticalErrorHandler(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Deny SSE-C requests if not made over TLS
if !globalIsTLS && (crypto.SSEC.IsRequested(r.Header) || crypto.SSECopy.IsRequested(r.Header)) {
if r.Method == http.MethodHead {
writeErrorResponseHeadersOnly(w, errorCodes.ToAPIErr(ErrInsecureSSECustomerRequest))
} else {
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrInsecureSSECustomerRequest), r.URL)
defer func() {
if rec := recover(); rec == logger.ErrCritical { // handle
stack := debug.Stack()
logger.Error("critical: \"%s %s\": %v\n%s", r.Method, r.URL, rec, string(stack))
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrInternalError), r.URL)
return
} else if rec != nil {
stack := debug.Stack()
logger.Error("panic: \"%s %s\": %v\n%s", r.Method, r.URL, rec, string(stack))
// Try to write an error response, upstream may not have written header.
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrInternalError), r.URL)
return
}
return
}
}()
h.ServeHTTP(w, r)
})
}


@ -155,7 +155,7 @@ func TestSSETLSHandler(t *testing.T) {
r.Header = test.Header
r.URL = test.URL
h := setSSETLSHandler(okHandler)
h := setRequestValidityHandler(okHandler)
h.ServeHTTP(w, r)
switch {


@ -180,7 +180,7 @@ func (er *erasureObjects) healErasureSet(ctx context.Context, buckets []BucketIn
// If we resume to the same bucket, forward to last known item.
if tracker.Bucket != "" {
if tracker.Bucket == bucket.Name {
forwardTo = tracker.Bucket
forwardTo = tracker.Object
} else {
// Reset to where last bucket ended if resuming.
tracker.resume()


@ -28,6 +28,10 @@ import (
const unavailable = "offline"
func shouldProxy() bool {
return newObjectLayerFn() == nil
}
// ClusterCheckHandler returns if the server is ready for requests.
func ClusterCheckHandler(w http.ResponseWriter, r *http.Request) {
if globalIsGateway {


@ -178,8 +178,8 @@ func (st *HTTPStats) toServerHTTPStats() ServerHTTPStats {
// Update statistics from http request and response data
func (st *HTTPStats) updateStats(api string, r *http.Request, w *logger.ResponseWriter) {
// A successful request has a 2xx response code
successReq := w.StatusCode >= 200 && w.StatusCode < 300
// A successful request has a 2xx or 3xx response code
successReq := w.StatusCode >= 200 && w.StatusCode < 400
if !strings.HasSuffix(r.URL.Path, prometheusMetricsPathLegacy) ||
!strings.HasSuffix(r.URL.Path, prometheusMetricsV2ClusterPath) ||
@ -189,7 +189,7 @@ func (st *HTTPStats) updateStats(api string, r *http.Request, w *logger.Response
switch w.StatusCode {
case 0:
case 499:
// 499 is a good error, shall be counted at canceled.
// 499 is a good error, shall be counted as canceled.
st.totalS3Canceled.Inc(api)
default:
st.totalS3Errors.Inc(api)


@ -27,22 +27,37 @@ import (
type iamDummyStore struct {
sync.RWMutex
*iamCache
usersSysType UsersSysType
}
func (ids *iamDummyStore) lock() {
func newIAMDummyStore(usersSysType UsersSysType) *iamDummyStore {
return &iamDummyStore{
iamCache: newIamCache(),
usersSysType: usersSysType,
}
}
func (ids *iamDummyStore) rlock() *iamCache {
ids.RLock()
return ids.iamCache
}
func (ids *iamDummyStore) runlock() {
ids.RUnlock()
}
func (ids *iamDummyStore) lock() *iamCache {
ids.Lock()
return ids.iamCache
}
func (ids *iamDummyStore) unlock() {
ids.Unlock()
}
func (ids *iamDummyStore) rlock() {
ids.RLock()
}
func (ids *iamDummyStore) runlock() {
ids.RUnlock()
func (ids *iamDummyStore) getUsersSysType() UsersSysType {
return ids.usersSysType
}
func (ids *iamDummyStore) migrateBackendFormat(context.Context) error {

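Each IAM backend now embeds an iamCache behind the store's RWMutex, and lock()/rlock() hand the cache out only for the duration of the critical section, so callers cannot touch it unguarded. A simplified model of that contract:

package main

import "sync"

// Simplified model of the shared store shape: the mutex guards the
// embedded cache, which lock()/rlock() expose only while held.
type iamCacheModel struct{ policies map[string]string }

type storeModel struct {
	sync.RWMutex
	cache *iamCacheModel
}

func (s *storeModel) lock() *iamCacheModel  { s.Lock(); return s.cache }
func (s *storeModel) unlock()               { s.Unlock() }
func (s *storeModel) rlock() *iamCacheModel { s.RLock(); return s.cache }
func (s *storeModel) runlock()              { s.RUnlock() }

func main() {
	s := &storeModel{cache: &iamCacheModel{policies: map[string]string{}}}
	c := s.lock()
	c.policies["readonly"] = "allow s3:GetObject" // mutate only under the write lock
	s.unlock()
}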

@ -62,27 +62,41 @@ func extractPathPrefixAndSuffix(s string, prefix string, suffix string) string {
type IAMEtcdStore struct {
sync.RWMutex
*iamCache
usersSysType UsersSysType
client *etcd.Client
}
func newIAMEtcdStore(client *etcd.Client) *IAMEtcdStore {
return &IAMEtcdStore{client: client}
func newIAMEtcdStore(client *etcd.Client, usersSysType UsersSysType) *IAMEtcdStore {
return &IAMEtcdStore{
iamCache: newIamCache(),
client: client,
usersSysType: usersSysType,
}
}
func (ies *IAMEtcdStore) lock() {
func (ies *IAMEtcdStore) rlock() *iamCache {
ies.RLock()
return ies.iamCache
}
func (ies *IAMEtcdStore) runlock() {
ies.RUnlock()
}
func (ies *IAMEtcdStore) lock() *iamCache {
ies.Lock()
return ies.iamCache
}
func (ies *IAMEtcdStore) unlock() {
ies.Unlock()
}
func (ies *IAMEtcdStore) rlock() {
ies.RLock()
}
func (ies *IAMEtcdStore) runlock() {
ies.RUnlock()
func (ies *IAMEtcdStore) getUsersSysType() UsersSysType {
return ies.usersSysType
}
func (ies *IAMEtcdStore) saveIAMConfig(ctx context.Context, item interface{}, itemPath string, opts ...options) error {
@ -244,6 +258,8 @@ func (ies *IAMEtcdStore) migrateToV1(ctx context.Context) error {
// Should be called under config migration lock
func (ies *IAMEtcdStore) migrateBackendFormat(ctx context.Context) error {
ies.Lock()
defer ies.Unlock()
return ies.migrateToV1(ctx)
}
@ -260,7 +276,7 @@ func (ies *IAMEtcdStore) loadPolicyDoc(ctx context.Context, policy string, m map
return nil
}
func (ies *IAMEtcdStore) getPolicyDoc(ctx context.Context, kvs *mvccpb.KeyValue, m map[string]iampolicy.Policy) error {
func (ies *IAMEtcdStore) getPolicyDocKV(ctx context.Context, kvs *mvccpb.KeyValue, m map[string]iampolicy.Policy) error {
var p iampolicy.Policy
err := getIAMConfig(&p, kvs.Value, string(kvs.Key))
if err != nil {
@ -286,14 +302,14 @@ func (ies *IAMEtcdStore) loadPolicyDocs(ctx context.Context, m map[string]iampol
// Parse all values to construct the policies data model.
for _, kvs := range r.Kvs {
if err = ies.getPolicyDoc(ctx, kvs, m); err != nil && err != errNoSuchPolicy {
if err = ies.getPolicyDocKV(ctx, kvs, m); err != nil && err != errNoSuchPolicy {
return err
}
}
return nil
}
func (ies *IAMEtcdStore) getUser(ctx context.Context, userkv *mvccpb.KeyValue, userType IAMUserType, m map[string]auth.Credentials, basePrefix string) error {
func (ies *IAMEtcdStore) getUserKV(ctx context.Context, userkv *mvccpb.KeyValue, userType IAMUserType, m map[string]auth.Credentials, basePrefix string) error {
var u UserIdentity
err := getIAMConfig(&u, userkv.Value, string(userkv.Key))
if err != nil {
@ -355,7 +371,7 @@ func (ies *IAMEtcdStore) loadUsers(ctx context.Context, userType IAMUserType, m
// Parse all users values to create the proper data model
for _, userKv := range r.Kvs {
if err = ies.getUser(ctx, userKv, userType, m, basePrefix); err != nil && err != errNoSuchUser {
if err = ies.getUserKV(ctx, userKv, userType, m, basePrefix); err != nil && err != errNoSuchUser {
return err
}
}


@ -34,30 +34,44 @@ import (
// IAMObjectStore implements IAMStorageAPI
type IAMObjectStore struct {
// Protect assignment to objAPI
// Protect access to storage within the current server.
sync.RWMutex
*iamCache
usersSysType UsersSysType
objAPI ObjectLayer
}
func newIAMObjectStore(objAPI ObjectLayer) *IAMObjectStore {
return &IAMObjectStore{objAPI: objAPI}
func newIAMObjectStore(objAPI ObjectLayer, usersSysType UsersSysType) *IAMObjectStore {
return &IAMObjectStore{
iamCache: newIamCache(),
objAPI: objAPI,
usersSysType: usersSysType,
}
}
func (iamOS *IAMObjectStore) lock() {
func (iamOS *IAMObjectStore) rlock() *iamCache {
iamOS.RLock()
return iamOS.iamCache
}
func (iamOS *IAMObjectStore) runlock() {
iamOS.RUnlock()
}
func (iamOS *IAMObjectStore) lock() *iamCache {
iamOS.Lock()
return iamOS.iamCache
}
func (iamOS *IAMObjectStore) unlock() {
iamOS.Unlock()
}
func (iamOS *IAMObjectStore) rlock() {
iamOS.RLock()
}
func (iamOS *IAMObjectStore) runlock() {
iamOS.RUnlock()
func (iamOS *IAMObjectStore) getUsersSysType() UsersSysType {
return iamOS.usersSysType
}
// Migrate users directory in a single scan.
@ -182,6 +196,8 @@ func (iamOS *IAMObjectStore) migrateToV1(ctx context.Context) error {
// Should be called under config migration lock
func (iamOS *IAMObjectStore) migrateBackendFormat(ctx context.Context) error {
iamOS.Lock()
defer iamOS.Unlock()
return iamOS.migrateToV1(ctx)
}

cmd/iam-store.go (new file, 1716 lines): file diff suppressed because it is too large.

cmd/iam.go (1864 lines): file diff suppressed because it is too large.


@ -22,8 +22,8 @@ import (
"net/http"
"time"
jwtgo "github.com/golang-jwt/jwt"
jwtreq "github.com/golang-jwt/jwt/request"
jwtgo "github.com/golang-jwt/jwt/v4"
jwtreq "github.com/golang-jwt/jwt/v4/request"
"github.com/minio/minio/internal/auth"
xjwt "github.com/minio/minio/internal/jwt"
"github.com/minio/minio/internal/logger"


@ -22,7 +22,7 @@ import (
"os"
"testing"
jwtgo "github.com/golang-jwt/jwt"
jwtgo "github.com/golang-jwt/jwt/v4"
"github.com/minio/minio/internal/auth"
xjwt "github.com/minio/minio/internal/jwt"
)

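The jwt-go dependency moves to the properly released github.com/golang-jwt/jwt/v4 module; for basic signing and parsing the API is source-compatible, so only the import path changes. A minimal example against the v4 import, with a placeholder HMAC key:

package main

import (
	"fmt"

	jwtgo "github.com/golang-jwt/jwt/v4"
)

func main() {
	key := []byte("placeholder-secret") // illustration only
	tok := jwtgo.NewWithClaims(jwtgo.SigningMethodHS256, jwtgo.MapClaims{"sub": "minio"})
	signed, err := tok.SignedString(key)
	if err != nil {
		panic(err)
	}
	parsed, err := jwtgo.Parse(signed, func(t *jwtgo.Token) (interface{}, error) {
		return key, nil // same key verifies the HMAC signature
	})
	fmt.Println(parsed.Valid, err) // true <nil>
}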

@ -190,12 +190,12 @@ func (l *localLocker) RLock(ctx context.Context, args dsync.LockArgs) (reply boo
if reply = !isWriteLock(lri); reply {
// Unless there is a write lock
l.lockMap[resource] = append(l.lockMap[resource], lrInfo)
l.lockUID[args.UID] = formatUUID(resource, 0)
l.lockUID[formatUUID(args.UID, 0)] = resource
}
} else {
// No locks held on the given name, so claim (first) read lock
l.lockMap[resource] = []lockRequesterInfo{lrInfo}
l.lockUID[args.UID] = formatUUID(resource, 0)
l.lockUID[formatUUID(args.UID, 0)] = resource
reply = true
}
return reply, nil
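The leak came from keying lockUID by the bare UID: a later entry under the same UID overwrote the earlier one, which then could never be removed. Keying by a (UID, resource index) pair keeps every entry individually addressable. A sketch of that scheme; the real formatUUID formatting may differ:

package main

import "fmt"

// Assumed shape of the key helper: one key per (uid, resource index).
func formatUUIDKey(uid string, idx int) string {
	return fmt.Sprintf("%s:%d", uid, idx)
}

func main() {
	lockUID := make(map[string]string) // key -> locked resource
	uid := "0cc1d426-0577-47a5-9fbd-000000000001"
	// Two locks recorded under one UID no longer collide:
	lockUID[formatUUIDKey(uid, 0)] = "bucket/object-a"
	lockUID[formatUUIDKey(uid, 1)] = "bucket/object-b"
	fmt.Println(len(lockUID)) // 2: both entries remain removable on unlock
}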

cmd/local-locker_test.go (new file, 256 lines)

@ -0,0 +1,256 @@
// Copyright (c) 2015-2021 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"context"
"testing"
"time"
"github.com/minio/minio/internal/dsync"
)
func TestLocalLockerExpire(t *testing.T) {
wResources := make([]string, 1000)
rResources := make([]string, 1000)
l := newLocker()
ctx := context.Background()
for i := range wResources {
arg := dsync.LockArgs{
UID: mustGetUUID(),
Resources: []string{mustGetUUID()},
Source: t.Name(),
Owner: "owner",
Quorum: 0,
}
ok, err := l.Lock(ctx, arg)
if err != nil {
t.Fatal(err)
}
if !ok {
t.Fatal("did not get write lock")
}
wResources[i] = arg.Resources[0]
}
for i := range rResources {
name := mustGetUUID()
arg := dsync.LockArgs{
UID: mustGetUUID(),
Resources: []string{name},
Source: t.Name(),
Owner: "owner",
Quorum: 0,
}
ok, err := l.RLock(ctx, arg)
if err != nil {
t.Fatal(err)
}
if !ok {
t.Fatal("did not get write lock")
}
// RLock twice
ok, err = l.RLock(ctx, arg)
if err != nil {
t.Fatal(err)
}
if !ok {
t.Fatal("did not get write lock")
}
rResources[i] = arg.Resources[0]
}
if len(l.lockMap) != len(rResources)+len(wResources) {
t.Fatalf("lockmap len, got %d, want %d + %d", len(l.lockMap), len(rResources), len(wResources))
}
if len(l.lockUID) != len(rResources)+len(wResources) {
t.Fatalf("lockUID len, got %d, want %d + %d", len(l.lockUID), len(rResources), len(wResources))
}
// Expire an hour from now, should keep all
l.expireOldLocks(time.Hour)
if len(l.lockMap) != len(rResources)+len(wResources) {
t.Fatalf("lockmap len, got %d, want %d + %d", len(l.lockMap), len(rResources), len(wResources))
}
if len(l.lockUID) != len(rResources)+len(wResources) {
t.Fatalf("lockUID len, got %d, want %d + %d", len(l.lockUID), len(rResources), len(wResources))
}
// Expire a minute ago.
l.expireOldLocks(-time.Minute)
if len(l.lockMap) != 0 {
t.Fatalf("after cleanup should be empty, got %d", len(l.lockMap))
}
if len(l.lockUID) != 0 {
t.Fatalf("lockUID len, got %d, want %d", len(l.lockUID), 0)
}
}
func TestLocalLockerUnlock(t *testing.T) {
const n = 1000
const m = 5
wResources := make([][m]string, n)
rResources := make([]string, n)
wUIDs := make([]string, n)
rUIDs := make([]string, 0, n*2)
l := newLocker()
ctx := context.Background()
for i := range wResources {
names := [m]string{}
for j := range names {
names[j] = mustGetUUID()
}
uid := mustGetUUID()
arg := dsync.LockArgs{
UID: uid,
Resources: names[:],
Source: t.Name(),
Owner: "owner",
Quorum: 0,
}
ok, err := l.Lock(ctx, arg)
if err != nil {
t.Fatal(err)
}
if !ok {
t.Fatal("did not get write lock")
}
wResources[i] = names
wUIDs[i] = uid
}
for i := range rResources {
name := mustGetUUID()
uid := mustGetUUID()
arg := dsync.LockArgs{
UID: uid,
Resources: []string{name},
Source: t.Name(),
Owner: "owner",
Quorum: 0,
}
ok, err := l.RLock(ctx, arg)
if err != nil {
t.Fatal(err)
}
if !ok {
t.Fatal("did not get write lock")
}
rUIDs = append(rUIDs, uid)
// RLock twice, different uid
uid = mustGetUUID()
arg.UID = uid
ok, err = l.RLock(ctx, arg)
if err != nil {
t.Fatal(err)
}
if !ok {
t.Fatal("did not get write lock")
}
rResources[i] = name
rUIDs = append(rUIDs, uid)
}
// Each Lock has m entries
if len(l.lockMap) != len(rResources)+len(wResources)*m {
t.Fatalf("lockmap len, got %d, want %d + %d", len(l.lockMap), len(rResources), len(wResources)*m)
}
// A UID is added for every resource
if len(l.lockUID) != len(rResources)*2+len(wResources)*m {
t.Fatalf("lockUID len, got %d, want %d + %d", len(l.lockUID), len(rResources)*2, len(wResources)*m)
}
// RUnlock once...
for i, name := range rResources {
arg := dsync.LockArgs{
UID: rUIDs[i*2],
Resources: []string{name},
Source: t.Name(),
Owner: "owner",
Quorum: 0,
}
ok, err := l.RUnlock(ctx, arg)
if err != nil {
t.Fatal(err)
}
if !ok {
t.Fatal("did not get write lock")
}
}
// Each Lock has m entries
if len(l.lockMap) != len(rResources)+len(wResources)*m {
t.Fatalf("lockmap len, got %d, want %d + %d", len(l.lockMap), len(rResources), len(wResources)*m)
}
// A UID is added for every resource.
// We removed len(rResources) read locks.
if len(l.lockUID) != len(rResources)+len(wResources)*m {
t.Fatalf("lockUID len, got %d, want %d + %d", len(l.lockUID), len(rResources), len(wResources)*m)
}
// RUnlock again, different uids
for i, name := range rResources {
arg := dsync.LockArgs{
UID: rUIDs[i*2+1],
Resources: []string{name},
Source: "minio",
Owner: "owner",
Quorum: 0,
}
ok, err := l.RUnlock(ctx, arg)
if err != nil {
t.Fatal(err)
}
if !ok {
t.Fatal("did not get write lock")
}
}
// Each Lock has m entries
if len(l.lockMap) != 0+len(wResources)*m {
t.Fatalf("lockmap len, got %d, want %d + %d", len(l.lockMap), 0, len(wResources)*m)
}
// A UID is added for every resource.
// We removed all the read-locked entries.
if len(l.lockUID) != len(wResources)*m {
t.Fatalf("lockUID len, got %d, want %d + %d", len(l.lockUID), 0, len(wResources)*m)
}
// Remove write locked
for i, names := range wResources {
arg := dsync.LockArgs{
UID: wUIDs[i],
Resources: names[:],
Source: "minio",
Owner: "owner",
Quorum: 0,
}
ok, err := l.Unlock(ctx, arg)
if err != nil {
t.Fatal(err)
}
if !ok {
t.Fatal("did not get write lock")
}
}
// All should be gone now...
// Each Lock has m entries
if len(l.lockMap) != 0 {
t.Fatalf("lockmap len, got %d, want %d + %d", len(l.lockMap), 0, 0)
}
if len(l.lockUID) != 0 {
t.Fatalf("lockUID len, got %d, want %d + %d", len(l.lockUID), 0, 0)
}
}


@ -56,14 +56,13 @@ const metacacheStreamVersion = 2
// metacacheWriter provides a serializer of metacache objects.
type metacacheWriter struct {
streamErr error
mw *msgp.Writer
creator func() error
closer func() error
blockSize int
streamWg sync.WaitGroup
reuseBlocks bool
streamErr error
streamWg sync.WaitGroup
}
// newMetacacheWriter will create a serializer that will write objects in given order to the output.


@ -19,9 +19,11 @@ package cmd
import (
"context"
"fmt"
"io"
"net/http"
"net/url"
"runtime/debug"
"sort"
"strconv"
"strings"
@ -114,6 +116,7 @@ func (s *xlStorage) WalkDir(ctx context.Context, opts WalkDirOptions, wr io.Writ
forward := ""
if len(opts.ForwardTo) > 0 && strings.HasPrefix(opts.ForwardTo, current) {
forward = strings.TrimPrefix(opts.ForwardTo, current)
// Trim further directories and trailing slash.
if idx := strings.IndexByte(forward, '/'); idx > 0 {
forward = forward[:idx]
}
@ -161,7 +164,7 @@ func (s *xlStorage) WalkDir(ctx context.Context, opts WalkDirOptions, wr io.Writ
entries[i] = entry
continue
}
// Trim slash, maybe compiler is clever?
// Trim slash, since we don't know if this is a folder or an object.
entries[i] = entries[i][:len(entry)-1]
continue
}
@ -212,9 +215,12 @@ func (s *xlStorage) WalkDir(ctx context.Context, opts WalkDirOptions, wr io.Writ
dirStack := make([]string, 0, 5)
prefix = "" // Remove prefix after first level as we have already filtered the list.
if len(forward) > 0 {
idx := sort.SearchStrings(entries, forward)
if idx > 0 {
entries = entries[idx:]
// Conservative forwarding. Entries may be either objects or prefixes.
for i, entry := range entries {
if entry >= forward || strings.HasPrefix(forward, entry) {
entries = entries[i:]
break
}
}
}
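sort.SearchStrings dropped any entry sorting before the marker, including a parent directory that still contains it: resuming at "dir/obj" must keep the entry "dir/" even though "dir/" < "dir/obj" lexically. The loop above keeps such prefixes; a self-contained check of both conditions:

package main

import (
	"fmt"
	"strings"
)

// forwardEntries skips entries that sort before the resume marker,
// except parent prefixes of the marker, which must still be descended.
func forwardEntries(entries []string, forward string) []string {
	for i, entry := range entries {
		if entry >= forward || strings.HasPrefix(forward, entry) {
			return entries[i:]
		}
	}
	return nil
}

func main() {
	entries := []string{"apple", "dir/", "zebra"} // sorted listing
	// sort.SearchStrings(entries, "dir/obj") would return 2, losing "dir/".
	fmt.Println(forwardEntries(entries, "dir/obj")) // [dir/ zebra]
}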
@ -357,6 +363,12 @@ func (s *storageRESTServer) WalkDirHandler(w http.ResponseWriter, r *http.Reques
prefix := r.Form.Get(storageRESTPrefixFilter)
forward := r.Form.Get(storageRESTForwardFilter)
writer := streamHTTPResponse(w)
defer func() {
if r := recover(); r != nil {
debug.PrintStack()
writer.CloseWithError(fmt.Errorf("panic: %v", r))
}
}()
writer.CloseWithError(s.storage.WalkDir(r.Context(), WalkDirOptions{
Bucket: volume,
BaseDir: dirPath,


@ -56,18 +56,21 @@ const (
// metacache contains a tracked cache entry.
type metacache struct {
id string `msg:"id"`
bucket string `msg:"b"`
root string `msg:"root"`
recursive bool `msg:"rec"`
filter string `msg:"flt"`
status scanStatus `msg:"stat"`
fileNotFound bool `msg:"fnf"`
error string `msg:"err"`
started time.Time `msg:"st"`
// Do not re-arrange this struct; it has been ordered to use less
// space. If you change it, please run https://github.com/orijtech/structslop
// and verify that your changes are optimal.
ended time.Time `msg:"end"`
lastUpdate time.Time `msg:"u"`
started time.Time `msg:"st"`
lastHandout time.Time `msg:"lh"`
lastUpdate time.Time `msg:"u"`
bucket string `msg:"b"`
filter string `msg:"flt"`
id string `msg:"id"`
error string `msg:"err"`
root string `msg:"root"`
fileNotFound bool `msg:"fnf"`
status scanStatus `msg:"stat"`
recursive bool `msg:"rec"`
dataVersion uint8 `msg:"v"`
}
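Ordering fields from widest to narrowest stops the compiler from inserting alignment padding between them, which is the saving the structslop-guided reordering above is chasing. A self-contained demonstration of the effect:

package main

import (
	"fmt"
	"unsafe"
)

// Poor layout: bools interleaved with 8-byte fields force padding.
type loose struct {
	a bool
	t int64
	b bool
	u int64
	c bool
}

// Packed layout: wide fields first, narrow fields grouped at the end.
type packed struct {
	t int64
	u int64
	a bool
	b bool
	c bool
}

func main() {
	fmt.Println(unsafe.Sizeof(loose{}))  // 40 on 64-bit platforms
	fmt.Println(unsafe.Sizeof(packed{})) // 24 on 64-bit platforms
}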


@ -24,10 +24,28 @@ func (z *metacache) DecodeMsg(dc *msgp.Reader) (err error) {
return
}
switch msgp.UnsafeString(field) {
case "id":
z.id, err = dc.ReadString()
case "end":
z.ended, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "id")
err = msgp.WrapError(err, "ended")
return
}
case "st":
z.started, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "started")
return
}
case "lh":
z.lastHandout, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "lastHandout")
return
}
case "u":
z.lastUpdate, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "lastUpdate")
return
}
case "b":
@ -36,22 +54,34 @@ func (z *metacache) DecodeMsg(dc *msgp.Reader) (err error) {
err = msgp.WrapError(err, "bucket")
return
}
case "flt":
z.filter, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "filter")
return
}
case "id":
z.id, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "id")
return
}
case "err":
z.error, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "error")
return
}
case "root":
z.root, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "root")
return
}
case "rec":
z.recursive, err = dc.ReadBool()
case "fnf":
z.fileNotFound, err = dc.ReadBool()
if err != nil {
err = msgp.WrapError(err, "recursive")
return
}
case "flt":
z.filter, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "filter")
err = msgp.WrapError(err, "fileNotFound")
return
}
case "stat":
@ -64,40 +94,10 @@ func (z *metacache) DecodeMsg(dc *msgp.Reader) (err error) {
}
z.status = scanStatus(zb0002)
}
case "fnf":
z.fileNotFound, err = dc.ReadBool()
case "rec":
z.recursive, err = dc.ReadBool()
if err != nil {
err = msgp.WrapError(err, "fileNotFound")
return
}
case "err":
z.error, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "error")
return
}
case "st":
z.started, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "started")
return
}
case "end":
z.ended, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "ended")
return
}
case "u":
z.lastUpdate, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "lastUpdate")
return
}
case "lh":
z.lastHandout, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "lastHandout")
err = msgp.WrapError(err, "recursive")
return
}
case "v":
@ -120,84 +120,14 @@ func (z *metacache) DecodeMsg(dc *msgp.Reader) (err error) {
// EncodeMsg implements msgp.Encodable
func (z *metacache) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 13
// write "id"
err = en.Append(0x8d, 0xa2, 0x69, 0x64)
// write "end"
err = en.Append(0x8d, 0xa3, 0x65, 0x6e, 0x64)
if err != nil {
return
}
err = en.WriteString(z.id)
err = en.WriteTime(z.ended)
if err != nil {
err = msgp.WrapError(err, "id")
return
}
// write "b"
err = en.Append(0xa1, 0x62)
if err != nil {
return
}
err = en.WriteString(z.bucket)
if err != nil {
err = msgp.WrapError(err, "bucket")
return
}
// write "root"
err = en.Append(0xa4, 0x72, 0x6f, 0x6f, 0x74)
if err != nil {
return
}
err = en.WriteString(z.root)
if err != nil {
err = msgp.WrapError(err, "root")
return
}
// write "rec"
err = en.Append(0xa3, 0x72, 0x65, 0x63)
if err != nil {
return
}
err = en.WriteBool(z.recursive)
if err != nil {
err = msgp.WrapError(err, "recursive")
return
}
// write "flt"
err = en.Append(0xa3, 0x66, 0x6c, 0x74)
if err != nil {
return
}
err = en.WriteString(z.filter)
if err != nil {
err = msgp.WrapError(err, "filter")
return
}
// write "stat"
err = en.Append(0xa4, 0x73, 0x74, 0x61, 0x74)
if err != nil {
return
}
err = en.WriteUint8(uint8(z.status))
if err != nil {
err = msgp.WrapError(err, "status")
return
}
// write "fnf"
err = en.Append(0xa3, 0x66, 0x6e, 0x66)
if err != nil {
return
}
err = en.WriteBool(z.fileNotFound)
if err != nil {
err = msgp.WrapError(err, "fileNotFound")
return
}
// write "err"
err = en.Append(0xa3, 0x65, 0x72, 0x72)
if err != nil {
return
}
err = en.WriteString(z.error)
if err != nil {
err = msgp.WrapError(err, "error")
err = msgp.WrapError(err, "ended")
return
}
// write "st"
@ -210,14 +140,14 @@ func (z *metacache) EncodeMsg(en *msgp.Writer) (err error) {
err = msgp.WrapError(err, "started")
return
}
// write "end"
err = en.Append(0xa3, 0x65, 0x6e, 0x64)
// write "lh"
err = en.Append(0xa2, 0x6c, 0x68)
if err != nil {
return
}
err = en.WriteTime(z.ended)
err = en.WriteTime(z.lastHandout)
if err != nil {
err = msgp.WrapError(err, "ended")
err = msgp.WrapError(err, "lastHandout")
return
}
// write "u"
@ -230,14 +160,84 @@ func (z *metacache) EncodeMsg(en *msgp.Writer) (err error) {
err = msgp.WrapError(err, "lastUpdate")
return
}
// write "lh"
err = en.Append(0xa2, 0x6c, 0x68)
// write "b"
err = en.Append(0xa1, 0x62)
if err != nil {
return
}
err = en.WriteTime(z.lastHandout)
err = en.WriteString(z.bucket)
if err != nil {
err = msgp.WrapError(err, "lastHandout")
err = msgp.WrapError(err, "bucket")
return
}
// write "flt"
err = en.Append(0xa3, 0x66, 0x6c, 0x74)
if err != nil {
return
}
err = en.WriteString(z.filter)
if err != nil {
err = msgp.WrapError(err, "filter")
return
}
// write "id"
err = en.Append(0xa2, 0x69, 0x64)
if err != nil {
return
}
err = en.WriteString(z.id)
if err != nil {
err = msgp.WrapError(err, "id")
return
}
// write "err"
err = en.Append(0xa3, 0x65, 0x72, 0x72)
if err != nil {
return
}
err = en.WriteString(z.error)
if err != nil {
err = msgp.WrapError(err, "error")
return
}
// write "root"
err = en.Append(0xa4, 0x72, 0x6f, 0x6f, 0x74)
if err != nil {
return
}
err = en.WriteString(z.root)
if err != nil {
err = msgp.WrapError(err, "root")
return
}
// write "fnf"
err = en.Append(0xa3, 0x66, 0x6e, 0x66)
if err != nil {
return
}
err = en.WriteBool(z.fileNotFound)
if err != nil {
err = msgp.WrapError(err, "fileNotFound")
return
}
// write "stat"
err = en.Append(0xa4, 0x73, 0x74, 0x61, 0x74)
if err != nil {
return
}
err = en.WriteUint8(uint8(z.status))
if err != nil {
err = msgp.WrapError(err, "status")
return
}
// write "rec"
err = en.Append(0xa3, 0x72, 0x65, 0x63)
if err != nil {
return
}
err = en.WriteBool(z.recursive)
if err != nil {
err = msgp.WrapError(err, "recursive")
return
}
// write "v"
@ -257,42 +257,42 @@ func (z *metacache) EncodeMsg(en *msgp.Writer) (err error) {
func (z *metacache) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 13
// string "id"
o = append(o, 0x8d, 0xa2, 0x69, 0x64)
o = msgp.AppendString(o, z.id)
// string "b"
o = append(o, 0xa1, 0x62)
o = msgp.AppendString(o, z.bucket)
// string "root"
o = append(o, 0xa4, 0x72, 0x6f, 0x6f, 0x74)
o = msgp.AppendString(o, z.root)
// string "rec"
o = append(o, 0xa3, 0x72, 0x65, 0x63)
o = msgp.AppendBool(o, z.recursive)
// string "flt"
o = append(o, 0xa3, 0x66, 0x6c, 0x74)
o = msgp.AppendString(o, z.filter)
// string "stat"
o = append(o, 0xa4, 0x73, 0x74, 0x61, 0x74)
o = msgp.AppendUint8(o, uint8(z.status))
// string "fnf"
o = append(o, 0xa3, 0x66, 0x6e, 0x66)
o = msgp.AppendBool(o, z.fileNotFound)
// string "err"
o = append(o, 0xa3, 0x65, 0x72, 0x72)
o = msgp.AppendString(o, z.error)
// string "end"
o = append(o, 0x8d, 0xa3, 0x65, 0x6e, 0x64)
o = msgp.AppendTime(o, z.ended)
// string "st"
o = append(o, 0xa2, 0x73, 0x74)
o = msgp.AppendTime(o, z.started)
// string "end"
o = append(o, 0xa3, 0x65, 0x6e, 0x64)
o = msgp.AppendTime(o, z.ended)
// string "u"
o = append(o, 0xa1, 0x75)
o = msgp.AppendTime(o, z.lastUpdate)
// string "lh"
o = append(o, 0xa2, 0x6c, 0x68)
o = msgp.AppendTime(o, z.lastHandout)
// string "u"
o = append(o, 0xa1, 0x75)
o = msgp.AppendTime(o, z.lastUpdate)
// string "b"
o = append(o, 0xa1, 0x62)
o = msgp.AppendString(o, z.bucket)
// string "flt"
o = append(o, 0xa3, 0x66, 0x6c, 0x74)
o = msgp.AppendString(o, z.filter)
// string "id"
o = append(o, 0xa2, 0x69, 0x64)
o = msgp.AppendString(o, z.id)
// string "err"
o = append(o, 0xa3, 0x65, 0x72, 0x72)
o = msgp.AppendString(o, z.error)
// string "root"
o = append(o, 0xa4, 0x72, 0x6f, 0x6f, 0x74)
o = msgp.AppendString(o, z.root)
// string "fnf"
o = append(o, 0xa3, 0x66, 0x6e, 0x66)
o = msgp.AppendBool(o, z.fileNotFound)
// string "stat"
o = append(o, 0xa4, 0x73, 0x74, 0x61, 0x74)
o = msgp.AppendUint8(o, uint8(z.status))
// string "rec"
o = append(o, 0xa3, 0x72, 0x65, 0x63)
o = msgp.AppendBool(o, z.recursive)
// string "v"
o = append(o, 0xa1, 0x76)
o = msgp.AppendUint8(o, z.dataVersion)
@ -317,10 +317,28 @@ func (z *metacache) UnmarshalMsg(bts []byte) (o []byte, err error) {
return
}
switch msgp.UnsafeString(field) {
case "id":
z.id, bts, err = msgp.ReadStringBytes(bts)
case "end":
z.ended, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "id")
err = msgp.WrapError(err, "ended")
return
}
case "st":
z.started, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "started")
return
}
case "lh":
z.lastHandout, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "lastHandout")
return
}
case "u":
z.lastUpdate, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "lastUpdate")
return
}
case "b":
@ -329,22 +347,34 @@ func (z *metacache) UnmarshalMsg(bts []byte) (o []byte, err error) {
err = msgp.WrapError(err, "bucket")
return
}
case "flt":
z.filter, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "filter")
return
}
case "id":
z.id, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "id")
return
}
case "err":
z.error, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "error")
return
}
case "root":
z.root, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "root")
return
}
case "rec":
z.recursive, bts, err = msgp.ReadBoolBytes(bts)
case "fnf":
z.fileNotFound, bts, err = msgp.ReadBoolBytes(bts)
if err != nil {
err = msgp.WrapError(err, "recursive")
return
}
case "flt":
z.filter, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "filter")
err = msgp.WrapError(err, "fileNotFound")
return
}
case "stat":
@ -357,40 +387,10 @@ func (z *metacache) UnmarshalMsg(bts []byte) (o []byte, err error) {
}
z.status = scanStatus(zb0002)
}
case "fnf":
z.fileNotFound, bts, err = msgp.ReadBoolBytes(bts)
case "rec":
z.recursive, bts, err = msgp.ReadBoolBytes(bts)
if err != nil {
err = msgp.WrapError(err, "fileNotFound")
return
}
case "err":
z.error, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "error")
return
}
case "st":
z.started, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "started")
return
}
case "end":
z.ended, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "ended")
return
}
case "u":
z.lastUpdate, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "lastUpdate")
return
}
case "lh":
z.lastHandout, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "lastHandout")
err = msgp.WrapError(err, "recursive")
return
}
case "v":
@ -413,7 +413,7 @@ func (z *metacache) UnmarshalMsg(bts []byte) (o []byte, err error) {
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *metacache) Msgsize() (s int) {
s = 1 + 3 + msgp.StringPrefixSize + len(z.id) + 2 + msgp.StringPrefixSize + len(z.bucket) + 5 + msgp.StringPrefixSize + len(z.root) + 4 + msgp.BoolSize + 4 + msgp.StringPrefixSize + len(z.filter) + 5 + msgp.Uint8Size + 4 + msgp.BoolSize + 4 + msgp.StringPrefixSize + len(z.error) + 3 + msgp.TimeSize + 4 + msgp.TimeSize + 2 + msgp.TimeSize + 3 + msgp.TimeSize + 2 + msgp.Uint8Size
s = 1 + 4 + msgp.TimeSize + 3 + msgp.TimeSize + 3 + msgp.TimeSize + 2 + msgp.TimeSize + 2 + msgp.StringPrefixSize + len(z.bucket) + 4 + msgp.StringPrefixSize + len(z.filter) + 3 + msgp.StringPrefixSize + len(z.id) + 4 + msgp.StringPrefixSize + len(z.error) + 5 + msgp.StringPrefixSize + len(z.root) + 4 + msgp.BoolSize + 5 + msgp.Uint8Size + 4 + msgp.BoolSize + 2 + msgp.Uint8Size
return
}
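Note: the field reordering above is wire-compatible because metacache is serialized as a msgpack map keyed by the short strings in the switch ("id", "st", "lh", ...), so decoders match on key rather than position. A minimal illustration of the idea (hypothetical type, not the real metacache):

type cacheLike struct {
	ID      string    `msg:"id"`
	Started time.Time `msg:"st"`
}

// Map encoding writes {"id": ..., "st": ...}; a decoder generated from an
// older field order still looks up each key, so reordering fields is safe.
// Contrast this with the tuple-encoded FileInfo further below, where
// position is significant.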


@ -108,6 +108,10 @@ func (c *minioCollector) Collect(ch chan<- prometheus.Metric) {
}
func nodeHealthMetricsPrometheus(ch chan<- prometheus.Metric) {
if globalIsGateway {
return
}
nodesUp, nodesDown := GetPeerOnlineCount()
ch <- prometheus.MustNewConstMetric(
prometheus.NewDesc(


@ -225,7 +225,7 @@ func (d *naughtyDisk) Delete(ctx context.Context, volume string, path string, re
return d.disk.Delete(ctx, volume, path, recursive)
}
func (d *naughtyDisk) DeleteVersions(ctx context.Context, volume string, versions []FileInfo) []error {
func (d *naughtyDisk) DeleteVersions(ctx context.Context, volume string, versions []FileInfoVersions) []error {
if err := d.calcError(); err != nil {
errs := make([]error, len(versions))
for i := range errs {


@ -291,6 +291,13 @@ func (e MethodNotAllowed) Error() string {
return "Method not allowed: " + e.Bucket + "/" + e.Object
}
// ObjectLocked object is currently WORM protected.
type ObjectLocked GenericError
func (e ObjectLocked) Error() string {
return "Object is WORM protected and cannot be overwritten: " + e.Bucket + "/" + e.Object + "(" + e.VersionID + ")"
}
// ObjectAlreadyExists object already exists.
type ObjectAlreadyExists GenericError
@ -638,10 +645,12 @@ func (e UnsupportedMetadata) Error() string {
}
// BackendDown is returned for network errors or if the gateway's backend is down.
type BackendDown struct{}
type BackendDown struct {
Err string
}
func (e BackendDown) Error() string {
return "Backend down"
return e.Err
}
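With the new Err field, gateway call sites can surface the underlying cause instead of a fixed "Backend down" string. A hedged sketch of such a call site (the message prefix is hypothetical; the actual wrapping lives in the gateway layers):

if err != nil && xnet.IsNetworkOrHostDown(err, false) {
	// Propagate the root cause so clients see a descriptive message.
	return BackendDown{Err: "backend not reachable: " + err.Error()}
}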
// isErrBucketNotFound - Check if error type is BucketNotFound.


@ -26,13 +26,18 @@ import (
"github.com/minio/madmin-go"
"github.com/minio/minio-go/v7/pkg/encrypt"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/internal/bucket/replication"
"github.com/minio/pkg/bucket/policy"
"github.com/minio/minio/internal/bucket/replication"
xioutil "github.com/minio/minio/internal/ioutil"
)
// CheckPreconditionFn returns true if precondition check failed.
type CheckPreconditionFn func(o ObjectInfo) bool
// EvalMetadataFn validates input objInfo and returns an updated metadata
type EvalMetadataFn func(o ObjectInfo) error
// GetObjectInfoFn is the signature of GetObjectInfo function.
type GetObjectInfoFn func(ctx context.Context, bucket, object string, opts ObjectOptions) (ObjectInfo, error)
@ -50,6 +55,7 @@ type ObjectOptions struct {
UserDefined map[string]string // only set in case of POST/PUT operations
PartNumber int // only useful in case of GetObject/HeadObject
CheckPrecondFn CheckPreconditionFn // only set during GetObject/HeadObject/CopyObjectPart precondition evaluation
EvalMetadataFn EvalMetadataFn // only set for retention settings, meant to be used only when updating metadata in-place.
DeleteReplication ReplicationState // Represents internal replication state needed for Delete replication
Transition TransitionOptions
Expiration ExpirationOptions
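EvalMetadataFn lets a handler mutate object metadata inside PutObjectMetadata, under the object layer's own locking, instead of the previous read-modify-write round trip. A minimal sketch, assuming a hypothetical metadata key:

opts := ObjectOptions{
	VersionID: versionID,
	EvalMetadataFn: func(oi ObjectInfo) error {
		// Invoked with the current ObjectInfo; mutations to UserDefined
		// are persisted by PutObjectMetadata.
		oi.UserDefined["x-amz-meta-reviewed"] = "true" // hypothetical key
		return nil
	},
}
_, err := objectAPI.PutObjectMetadata(ctx, bucket, object, opts)

The legal-hold and retention handlers below are rewritten to exactly this shape.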
@ -257,6 +263,6 @@ func GetObject(ctx context.Context, api ObjectLayer, bucket, object string, star
}
defer reader.Close()
_, err = io.Copy(writer, reader)
_, err = xioutil.Copy(writer, reader)
return err
}


@ -1378,6 +1378,152 @@ func testListObjectVersions(obj ObjectLayer, instanceType string, t1 TestErrHand
}
}
// Wrapper for calling ListObjects continuation tests for both Erasure multiple disks and single node setup.
func TestListObjectsContinuation(t *testing.T) {
ExecObjectLayerTest(t, testListObjectsContinuation)
}
// Unit test for ListObjects continuation (paging via markers).
func testListObjectsContinuation(obj ObjectLayer, instanceType string, t1 TestErrHandler) {
t, _ := t1.(*testing.T)
testBuckets := []string{
// This bucket is used for testing ListObject operations.
"test-bucket-list-object-continuation-1",
"test-bucket-list-object-continuation-2",
}
for _, bucket := range testBuckets {
err := obj.MakeBucketWithLocation(context.Background(), bucket, BucketOptions{})
if err != nil {
t.Fatalf("%s : %s", instanceType, err.Error())
}
}
var err error
testObjects := []struct {
parentBucket string
name string
content string
meta map[string]string
}{
{testBuckets[0], "a/1.txt", "contentstring", nil},
{testBuckets[0], "a-1.txt", "contentstring", nil},
{testBuckets[0], "a.txt", "contentstring", nil},
{testBuckets[0], "apache2-doc/1.txt", "contentstring", nil},
{testBuckets[0], "apache2/1.txt", "contentstring", nil},
{testBuckets[0], "apache2/-sub/2.txt", "contentstring", nil},
{testBuckets[1], "azerty/1.txt", "contentstring", nil},
{testBuckets[1], "apache2-doc/1.txt", "contentstring", nil},
{testBuckets[1], "apache2/1.txt", "contentstring", nil},
}
for _, object := range testObjects {
md5Bytes := md5.Sum([]byte(object.content))
_, err = obj.PutObject(context.Background(), object.parentBucket, object.name, mustGetPutObjReader(t, bytes.NewBufferString(object.content),
int64(len(object.content)), hex.EncodeToString(md5Bytes[:]), ""), ObjectOptions{UserDefined: object.meta})
if err != nil {
t.Fatalf("%s : %s", instanceType, err.Error())
}
}
// Formulating the result set expected from the ListObjects calls inside the tests.
// This will be used in testCases for asserting the correctness of ListObjects output.
resultCases := []ListObjectsInfo{
{
Objects: []ObjectInfo{
{Name: "a-1.txt"},
{Name: "a.txt"},
{Name: "a/1.txt"},
{Name: "apache2-doc/1.txt"},
{Name: "apache2/-sub/2.txt"},
{Name: "apache2/1.txt"},
},
},
{
Objects: []ObjectInfo{
{Name: "apache2-doc/1.txt"},
{Name: "apache2/1.txt"},
},
},
{
Prefixes: []string{"apache2-doc/", "apache2/", "azerty/"},
},
}
testCases := []struct {
// Inputs to ListObjects.
bucketName string
prefix string
delimiter string
page int
// Expected output of ListObjects.
result ListObjectsInfo
}{
{testBuckets[0], "", "", 1, resultCases[0]},
{testBuckets[0], "a", "", 1, resultCases[0]},
{testBuckets[1], "apache", "", 1, resultCases[1]},
{testBuckets[1], "", "/", 1, resultCases[2]},
}
for i, testCase := range testCases {
testCase := testCase
t.Run(fmt.Sprintf("%s-Test%d", instanceType, i+1), func(t *testing.T) {
var foundObjects []ObjectInfo
var foundPrefixes []string
var marker = ""
for {
result, err := obj.ListObjects(context.Background(), testCase.bucketName,
testCase.prefix, marker, testCase.delimiter, testCase.page)
if err != nil {
t.Fatalf("Test %d: %s: Expected to pass, but failed with: <ERROR> %s", i+1, instanceType, err.Error())
}
foundObjects = append(foundObjects, result.Objects...)
foundPrefixes = append(foundPrefixes, result.Prefixes...)
if !result.IsTruncated {
break
}
marker = result.NextMarker
if len(result.Objects) > 0 {
// Replace the continuation marker with the last object name, so the listing cannot resume from the internal marker.
marker = result.Objects[len(result.Objects)-1].Name
}
}
if len(testCase.result.Objects) != len(foundObjects) {
t.Logf("want: %v", objInfoNames(testCase.result.Objects))
t.Logf("got: %v", objInfoNames(foundObjects))
t.Errorf("Test %d: %s: Expected number of objects in the result to be '%d', but found '%d' objects instead",
i+1, instanceType, len(testCase.result.Objects), len(foundObjects))
}
for j := 0; j < len(testCase.result.Objects); j++ {
if j >= len(foundObjects) {
t.Errorf("Test %d: %s: Expected object name to be \"%s\", but not nothing instead", i+1, instanceType, testCase.result.Objects[j].Name)
continue
}
if testCase.result.Objects[j].Name != foundObjects[j].Name {
t.Errorf("Test %d: %s: Expected object name to be \"%s\", but found \"%s\" instead", i+1, instanceType, testCase.result.Objects[j].Name, foundObjects[j].Name)
}
}
if len(testCase.result.Prefixes) != len(foundPrefixes) {
t.Logf("want: %v", testCase.result.Prefixes)
t.Logf("got: %v", foundPrefixes)
t.Errorf("Test %d: %s: Expected number of prefixes in the result to be '%d', but found '%d' prefixes instead",
i+1, instanceType, len(testCase.result.Prefixes), len(foundPrefixes))
}
for j := 0; j < len(testCase.result.Prefixes); j++ {
if j >= len(foundPrefixes) {
t.Errorf("Test %d: %s: Expected prefix name to be \"%s\", but found no result", i+1, instanceType, testCase.result.Prefixes[j])
continue
}
if testCase.result.Prefixes[j] != foundPrefixes[j] {
t.Errorf("Test %d: %s: Expected prefix name to be \"%s\", but found \"%s\" instead", i+1, instanceType, testCase.result.Prefixes[j], foundPrefixes[j])
}
}
})
}
}
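For reference, the pagination contract the test exercises, as a standalone sketch (error handling trimmed; consume is a hypothetical callback):

var marker string
for {
	res, err := obj.ListObjects(ctx, bucket, prefix, marker, "", 1000)
	if err != nil {
		return err
	}
	consume(res.Objects) // hypothetical consumer
	if !res.IsTruncated {
		break
	}
	// Resuming from NextMarker, or from the last returned object name,
	// must neither skip nor duplicate entries.
	marker = res.NextMarker
}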
// Initialize FS backend for the benchmark.
func initFSObjectsB(disk string, t *testing.B) (obj ObjectLayer) {
var err error


@ -54,7 +54,7 @@ import (
"github.com/minio/minio/internal/handlers"
"github.com/minio/minio/internal/hash"
xhttp "github.com/minio/minio/internal/http"
"github.com/minio/minio/internal/ioutil"
xioutil "github.com/minio/minio/internal/ioutil"
"github.com/minio/minio/internal/kms"
"github.com/minio/minio/internal/logger"
"github.com/minio/minio/internal/s3select"
@ -504,14 +504,14 @@ func (api objectAPIHandlers) getObjectHandler(ctx context.Context, objectAPI Obj
setHeadGetRespHeaders(w, r.Form)
statusCodeWritten := false
httpWriter := ioutil.WriteOnClose(w)
httpWriter := xioutil.WriteOnClose(w)
if rs != nil || opts.PartNumber > 0 {
statusCodeWritten = true
w.WriteHeader(http.StatusPartialContent)
}
// Write object content to response body
if _, err = io.Copy(httpWriter, gr); err != nil {
if _, err = xioutil.Copy(httpWriter, gr); err != nil {
if !httpWriter.HasWritten() && !statusCodeWritten {
// write error response only if no data or headers have been written to the client yet
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
@ -2283,6 +2283,9 @@ func (api objectAPIHandlers) NewMultipartUploadHandler(w http.ResponseWriter, r
return
}
newMultipartUpload := objectAPI.NewMultipartUpload
if api.CacheAPI() != nil {
newMultipartUpload = api.CacheAPI().NewMultipartUpload
}
uploadID, err := newMultipartUpload(ctx, bucket, object, opts)
if err != nil {
@ -2597,9 +2600,13 @@ func (api objectAPIHandlers) CopyObjectPartHandler(w http.ResponseWriter, r *htt
}
srcInfo.PutObjReader = pReader
copyObjectPart := objectAPI.CopyObjectPart
if api.CacheAPI() != nil {
copyObjectPart = api.CacheAPI().CopyObjectPart
}
// Copy source object to destination; if source and destination
// objects are the same, only the metadata is updated.
partInfo, err := objectAPI.CopyObjectPart(ctx, srcBucket, srcObject, dstBucket, dstObject, uploadID, partID,
partInfo, err := copyObjectPart(ctx, srcBucket, srcObject, dstBucket, dstObject, uploadID, partID,
startOffset, length, srcInfo, srcOpts, dstOpts)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
@ -2854,6 +2861,9 @@ func (api objectAPIHandlers) PutObjectPartHandler(w http.ResponseWriter, r *http
}
putObjectPart := objectAPI.PutObjectPart
if api.CacheAPI() != nil {
putObjectPart = api.CacheAPI().PutObjectPart
}
partInfo, err := putObjectPart(ctx, bucket, object, uploadID, partID, pReader, opts)
if err != nil {
@ -2907,6 +2917,9 @@ func (api objectAPIHandlers) AbortMultipartUploadHandler(w http.ResponseWriter,
return
}
abortMultipartUpload := objectAPI.AbortMultipartUpload
if api.CacheAPI() != nil {
abortMultipartUpload = api.CacheAPI().AbortMultipartUpload
}
if s3Error := checkRequestAuthType(ctx, r, policy.AbortMultipartUploadAction, bucket, object); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL)
@ -3191,7 +3204,9 @@ func (api objectAPIHandlers) CompleteMultipartUploadHandler(w http.ResponseWrite
s3MD5 := getCompleteMultipartMD5(originalCompleteParts)
completeMultiPartUpload := objectAPI.CompleteMultipartUpload
if api.CacheAPI() != nil {
completeMultiPartUpload = api.CacheAPI().CompleteMultipartUpload
}
// This code is specifically to handle the requirements for slow
// complete multipart upload operations on FS mode.
writeErrorResponseWithoutXMLHeader := func(ctx context.Context, w http.ResponseWriter, err APIError, reqURL *url.URL) {
@ -3209,7 +3224,6 @@ func (api objectAPIHandlers) CompleteMultipartUploadHandler(w http.ResponseWrite
setCommonHeaders(w)
w.Header().Set(xhttp.ContentType, string(mimeXML))
w.Write(encodedErrorResponse)
w.(http.Flusher).Flush()
}
versioned := globalBucketVersioningSys.Enabled(bucket)
@ -3522,53 +3536,39 @@ func (api objectAPIHandlers) PutObjectLegalHoldHandler(w http.ResponseWriter, r
return
}
getObjectInfo := objectAPI.GetObjectInfo
if api.CacheAPI() != nil {
getObjectInfo = api.CacheAPI().GetObjectInfo
}
opts, err := getOpts(ctx, r, bucket, object)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
objInfo, err := getObjectInfo(ctx, bucket, object, opts)
popts := ObjectOptions{
MTime: opts.MTime,
VersionID: opts.VersionID,
EvalMetadataFn: func(oi ObjectInfo) error {
oi.UserDefined[strings.ToLower(xhttp.AmzObjectLockLegalHold)] = strings.ToUpper(string(legalHold.Status))
oi.UserDefined[ReservedMetadataPrefixLower+ObjectLockLegalHoldTimestamp] = UTCNow().Format(time.RFC3339Nano)
dsc := mustReplicate(ctx, bucket, object, getMustReplicateOptions(oi, replication.MetadataReplicationType, opts))
if dsc.ReplicateAny() {
oi.UserDefined[ReservedMetadataPrefixLower+ReplicationTimestamp] = UTCNow().Format(time.RFC3339Nano)
oi.UserDefined[ReservedMetadataPrefixLower+ReplicationStatus] = dsc.PendingStatus()
}
return nil
},
}
objInfo, err := objectAPI.PutObjectMetadata(ctx, bucket, object, popts)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if objInfo.DeleteMarker {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL)
return
}
objInfo.UserDefined[strings.ToLower(xhttp.AmzObjectLockLegalHold)] = strings.ToUpper(string(legalHold.Status))
objInfo.UserDefined[ReservedMetadataPrefixLower+ObjectLockLegalHoldTimestamp] = UTCNow().Format(time.RFC3339Nano)
dsc := mustReplicate(ctx, bucket, object, getMustReplicateOptions(objInfo, replication.MetadataReplicationType, opts))
if dsc.ReplicateAny() {
objInfo.UserDefined[ReservedMetadataPrefixLower+ReplicationTimestamp] = UTCNow().Format(time.RFC3339Nano)
objInfo.UserDefined[ReservedMetadataPrefixLower+ReplicationStatus] = dsc.PendingStatus()
}
// if version-id is not specified retention is supposed to be set on the latest object.
if opts.VersionID == "" {
opts.VersionID = objInfo.VersionID
}
popts := ObjectOptions{
MTime: opts.MTime,
VersionID: opts.VersionID,
UserDefined: make(map[string]string, len(objInfo.UserDefined)),
}
for k, v := range objInfo.UserDefined {
popts.UserDefined[k] = v
}
if _, err = objectAPI.PutObjectMetadata(ctx, bucket, object, popts); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
if dsc.ReplicateAny() {
scheduleReplication(ctx, objInfo.Clone(), objectAPI, dsc, replication.MetadataReplicationType)
}
writeSuccessResponseHeadersOnly(w)
// Notify object event.
@ -3704,50 +3704,37 @@ func (api objectAPIHandlers) PutObjectRetentionHandler(w http.ResponseWriter, r
return
}
getObjectInfo := objectAPI.GetObjectInfo
if api.CacheAPI() != nil {
getObjectInfo = api.CacheAPI().GetObjectInfo
}
objInfo, s3Err := enforceRetentionBypassForPut(ctx, r, bucket, object, getObjectInfo, objRetention, cred, owner)
if s3Err != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Err), r.URL)
return
}
if objInfo.DeleteMarker {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL)
return
}
if objRetention.Mode.Valid() {
objInfo.UserDefined[strings.ToLower(xhttp.AmzObjectLockMode)] = string(objRetention.Mode)
objInfo.UserDefined[strings.ToLower(xhttp.AmzObjectLockRetainUntilDate)] = objRetention.RetainUntilDate.UTC().Format(time.RFC3339)
} else {
objInfo.UserDefined[strings.ToLower(xhttp.AmzObjectLockMode)] = ""
objInfo.UserDefined[strings.ToLower(xhttp.AmzObjectLockRetainUntilDate)] = ""
}
objInfo.UserDefined[ReservedMetadataPrefixLower+ObjectLockRetentionTimestamp] = UTCNow().Format(time.RFC3339Nano)
dsc := mustReplicate(ctx, bucket, object, getMustReplicateOptions(objInfo, replication.MetadataReplicationType, opts))
if dsc.ReplicateAny() {
objInfo.UserDefined[ReservedMetadataPrefixLower+ReplicationTimestamp] = UTCNow().Format(time.RFC3339Nano)
objInfo.UserDefined[ReservedMetadataPrefixLower+ReplicationStatus] = dsc.PendingStatus()
}
// if version-id is not specified retention is supposed to be set on the latest object.
if opts.VersionID == "" {
opts.VersionID = objInfo.VersionID
}
popts := ObjectOptions{
MTime: opts.MTime,
VersionID: opts.VersionID,
UserDefined: make(map[string]string, len(objInfo.UserDefined)),
MTime: opts.MTime,
VersionID: opts.VersionID,
EvalMetadataFn: func(oi ObjectInfo) error {
if err := enforceRetentionBypassForPut(ctx, r, oi, objRetention, cred, owner); err != nil {
return err
}
if objRetention.Mode.Valid() {
oi.UserDefined[strings.ToLower(xhttp.AmzObjectLockMode)] = string(objRetention.Mode)
oi.UserDefined[strings.ToLower(xhttp.AmzObjectLockRetainUntilDate)] = objRetention.RetainUntilDate.UTC().Format(time.RFC3339)
} else {
oi.UserDefined[strings.ToLower(xhttp.AmzObjectLockMode)] = ""
oi.UserDefined[strings.ToLower(xhttp.AmzObjectLockRetainUntilDate)] = ""
}
oi.UserDefined[ReservedMetadataPrefixLower+ObjectLockRetentionTimestamp] = UTCNow().Format(time.RFC3339Nano)
dsc := mustReplicate(ctx, bucket, object, getMustReplicateOptions(oi, replication.MetadataReplicationType, opts))
if dsc.ReplicateAny() {
oi.UserDefined[ReservedMetadataPrefixLower+ReplicationTimestamp] = UTCNow().Format(time.RFC3339Nano)
oi.UserDefined[ReservedMetadataPrefixLower+ReplicationStatus] = dsc.PendingStatus()
}
return nil
},
}
for k, v := range objInfo.UserDefined {
popts.UserDefined[k] = v
}
if _, err = objectAPI.PutObjectMetadata(ctx, bucket, object, popts); err != nil {
objInfo, err := objectAPI.PutObjectMetadata(ctx, bucket, object, popts)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
return
}
dsc := mustReplicate(ctx, bucket, object, getMustReplicateOptions(objInfo, replication.MetadataReplicationType, opts))
if dsc.ReplicateAny() {
scheduleReplication(ctx, objInfo.Clone(), objectAPI, dsc, replication.MetadataReplicationType)
}


@ -32,7 +32,7 @@ import (
"sync/atomic"
"time"
humanize "github.com/dustin/go-humanize"
"github.com/dustin/go-humanize"
"github.com/google/uuid"
"github.com/gorilla/mux"
"github.com/minio/madmin-go"
@ -55,9 +55,6 @@ func (s *peerRESTServer) GetLocksHandler(w http.ResponseWriter, r *http.Request)
ctx := newContext(r, w, "GetLocks")
logger.LogIf(ctx, gob.NewEncoder(w).Encode(globalLockServer.DupLockMap()))
w.(http.Flusher).Flush()
}
// DeletePolicyHandler - deletes a policy on the server.
@ -84,8 +81,6 @@ func (s *peerRESTServer) DeletePolicyHandler(w http.ResponseWriter, r *http.Requ
s.writeErrorResponse(w, err)
return
}
w.(http.Flusher).Flush()
}
// LoadPolicyHandler - reloads a policy on the server.
@ -112,8 +107,6 @@ func (s *peerRESTServer) LoadPolicyHandler(w http.ResponseWriter, r *http.Reques
s.writeErrorResponse(w, err)
return
}
w.(http.Flusher).Flush()
}
// LoadPolicyMappingHandler - reloads a policy mapping on the server.
@ -141,8 +134,6 @@ func (s *peerRESTServer) LoadPolicyMappingHandler(w http.ResponseWriter, r *http
s.writeErrorResponse(w, err)
return
}
w.(http.Flusher).Flush()
}
// DeleteServiceAccountHandler - deletes a service account on the server.
@ -169,8 +160,6 @@ func (s *peerRESTServer) DeleteServiceAccountHandler(w http.ResponseWriter, r *h
s.writeErrorResponse(w, err)
return
}
w.(http.Flusher).Flush()
}
// LoadServiceAccountHandler - reloads a service account on the server.
@ -197,8 +186,6 @@ func (s *peerRESTServer) LoadServiceAccountHandler(w http.ResponseWriter, r *htt
s.writeErrorResponse(w, err)
return
}
w.(http.Flusher).Flush()
}
// DeleteUserHandler - deletes a user on the server.
@ -225,8 +212,6 @@ func (s *peerRESTServer) DeleteUserHandler(w http.ResponseWriter, r *http.Reques
s.writeErrorResponse(w, err)
return
}
w.(http.Flusher).Flush()
}
// LoadUserHandler - reloads a user on the server.
@ -264,8 +249,6 @@ func (s *peerRESTServer) LoadUserHandler(w http.ResponseWriter, r *http.Request)
s.writeErrorResponse(w, err)
return
}
w.(http.Flusher).Flush()
}
// LoadGroupHandler - reloads group along with members list.
@ -288,8 +271,6 @@ func (s *peerRESTServer) LoadGroupHandler(w http.ResponseWriter, r *http.Request
s.writeErrorResponse(w, err)
return
}
w.(http.Flusher).Flush()
}
// StartProfilingHandler - Issues the start profiling command.
@ -329,8 +310,6 @@ func (s *peerRESTServer) StartProfilingHandler(w http.ResponseWriter, r *http.Re
}
globalProfiler[profiler] = prof
}
w.(http.Flusher).Flush()
}
// DownloadProfilingDataHandler - returns profiled data.
@ -346,8 +325,6 @@ func (s *peerRESTServer) DownloadProfilingDataHandler(w http.ResponseWriter, r *
s.writeErrorResponse(w, err)
return
}
defer w.(http.Flusher).Flush()
logger.LogIf(ctx, gob.NewEncoder(w).Encode(profileData))
}
@ -361,7 +338,6 @@ func (s *peerRESTServer) ServerInfoHandler(w http.ResponseWriter, r *http.Reques
ctx := newContext(r, w, "ServerInfo")
info := getLocalServerProperty(globalEndpoints, r)
defer w.(http.Flusher).Flush()
logger.LogIf(ctx, gob.NewEncoder(w).Encode(info))
}
@ -395,7 +371,6 @@ func (s *peerRESTServer) NetInfoHandler(w http.ResponseWriter, r *http.Request)
return
}
w.Header().Set("FinalStatus", "Success")
w.(http.Flusher).Flush()
}
func (s *peerRESTServer) DispatchNetInfoHandler(w http.ResponseWriter, r *http.Request) {
@ -411,7 +386,6 @@ func (s *peerRESTServer) DispatchNetInfoHandler(w http.ResponseWriter, r *http.R
done(nil)
logger.LogIf(ctx, gob.NewEncoder(w).Encode(info))
w.(http.Flusher).Flush()
}
// GetDrivePerfInfosHandler - returns all disks' serial/parallel performance information.
@ -426,7 +400,6 @@ func (s *peerRESTServer) GetDrivePerfInfosHandler(w http.ResponseWriter, r *http
info := getDrivePerfInfos(ctx, r.Host)
defer w.(http.Flusher).Flush()
logger.LogIf(ctx, gob.NewEncoder(w).Encode(info))
}
@ -442,7 +415,6 @@ func (s *peerRESTServer) GetCPUsHandler(w http.ResponseWriter, r *http.Request)
info := madmin.GetCPUs(ctx, r.Host)
defer w.(http.Flusher).Flush()
logger.LogIf(ctx, gob.NewEncoder(w).Encode(info))
}
@ -458,7 +430,6 @@ func (s *peerRESTServer) GetPartitionsHandler(w http.ResponseWriter, r *http.Req
info := madmin.GetPartitions(ctx, r.Host)
defer w.(http.Flusher).Flush()
logger.LogIf(ctx, gob.NewEncoder(w).Encode(info))
}
@ -474,7 +445,6 @@ func (s *peerRESTServer) GetOSInfoHandler(w http.ResponseWriter, r *http.Request
info := madmin.GetOSInfo(ctx, r.Host)
defer w.(http.Flusher).Flush()
logger.LogIf(ctx, gob.NewEncoder(w).Encode(info))
}
@ -490,7 +460,6 @@ func (s *peerRESTServer) GetProcInfoHandler(w http.ResponseWriter, r *http.Reque
info := madmin.GetProcInfo(ctx, r.Host)
defer w.(http.Flusher).Flush()
logger.LogIf(ctx, gob.NewEncoder(w).Encode(info))
}
@ -506,7 +475,6 @@ func (s *peerRESTServer) GetMemInfoHandler(w http.ResponseWriter, r *http.Reques
info := madmin.GetMemInfo(ctx, r.Host)
defer w.(http.Flusher).Flush()
logger.LogIf(ctx, gob.NewEncoder(w).Encode(info))
}
@ -523,7 +491,6 @@ func (s *peerRESTServer) GetSysConfigHandler(w http.ResponseWriter, r *http.Requ
info := madmin.GetSysConfig(ctx, r.Host)
defer w.(http.Flusher).Flush()
logger.LogIf(ctx, gob.NewEncoder(w).Encode(info))
}
@ -540,7 +507,6 @@ func (s *peerRESTServer) GetSysServicesHandler(w http.ResponseWriter, r *http.Re
info := madmin.GetSysServices(ctx, r.Host)
defer w.(http.Flusher).Flush()
logger.LogIf(ctx, gob.NewEncoder(w).Encode(info))
}
@ -556,7 +522,6 @@ func (s *peerRESTServer) GetSysErrorsHandler(w http.ResponseWriter, r *http.Requ
info := madmin.GetSysErrors(ctx, r.Host)
defer w.(http.Flusher).Flush()
logger.LogIf(ctx, gob.NewEncoder(w).Encode(info))
}
@ -618,7 +583,6 @@ func (s *peerRESTServer) GetBucketStatsHandler(w http.ResponseWriter, r *http.Re
bs := BucketStats{
ReplicationStats: globalReplicationStats.Get(bucketName),
}
defer w.(http.Flusher).Flush()
logger.LogIf(r.Context(), msgp.Encode(w, &bs))
}
@ -749,7 +713,6 @@ func (s *peerRESTServer) PutBucketNotificationHandler(w http.ResponseWriter, r *
}
globalNotificationSys.AddRulesMap(bucketName, rulesMap)
w.(http.Flusher).Flush()
}
// Return disk IDs of all the local disks.
@ -810,7 +773,6 @@ func (s *peerRESTServer) GetLocalDiskIDs(w http.ResponseWriter, r *http.Request)
ids := getLocalDiskIDs(z)
logger.LogIf(ctx, gob.NewEncoder(w).Encode(ids))
w.(http.Flusher).Flush()
}
// ServerUpdateHandler - updates the current server.
@ -1060,7 +1022,6 @@ func (s *peerRESTServer) BackgroundHealStatusHandler(w http.ResponseWriter, r *h
return
}
defer w.(http.Flusher).Flush()
logger.LogIf(ctx, gob.NewEncoder(w).Encode(state))
}
@ -1144,7 +1105,6 @@ func (s *peerRESTServer) GetBandwidth(w http.ResponseWriter, r *http.Request) {
s.writeErrorResponse(w, errors.New("Encoding report failed: "+err.Error()))
return
}
w.(http.Flusher).Flush()
}
// GetPeerMetrics gets the metrics to be federated across peers.
@ -1167,7 +1127,6 @@ func (s *peerRESTServer) GetPeerMetrics(w http.ResponseWriter, r *http.Request)
return
}
}
w.(http.Flusher).Flush()
}
// SpeedtestResult return value of the speedtest function
@ -1352,7 +1311,6 @@ func (s *peerRESTServer) SpeedtestHandler(w http.ResponseWriter, r *http.Request
done(nil)
logger.LogIf(r.Context(), gob.NewEncoder(w).Encode(result))
w.(http.Flusher).Flush()
}
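All of the removed w.(http.Flusher).Flush() calls were redundant: net/http flushes any buffered response data when a handler returns, and the unchecked type assertion would panic on a ResponseWriter that does not implement http.Flusher. The surviving shape is simply (hedged sketch; info stands for any encodable payload):

func (s *peerRESTServer) exampleHandler(w http.ResponseWriter, r *http.Request) {
	// Encoded bytes are flushed automatically when the handler returns.
	logger.LogIf(r.Context(), gob.NewEncoder(w).Encode(info))
}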
// registerPeerRESTHandlers - register peer rest router.


@ -30,7 +30,7 @@ import (
"testing"
"time"
humanize "github.com/dustin/go-humanize"
"github.com/dustin/go-humanize"
)
const (
@ -267,6 +267,16 @@ func testPostPolicyBucketHandler(obj ObjectLayer, instanceType string, t TestErr
dates: []interface{}{curTimePlus5Min.Format(iso8601TimeFormat), curTime.Format(iso8601DateFormat), curTime.Format(yyyymmdd)},
policy: `{"expiration": "%s","conditions":[["eq", "$bucket", "` + bucketName + `"], ["starts-with", "$key", "test/"], ["eq", "$x-amz-algorithm", "AWS4-HMAC-SHA256"], ["eq", "$x-amz-date", "%s"], ["eq", "$x-amz-credential", "` + credentials.AccessKey + `/%s/us-east-1/s3/aws4_request"],["eq", "$x-amz-meta-uuid", "1234"]]}`,
},
// Success case, big body.
{
objectName: "test",
data: bytes.Repeat([]byte("a"), 10<<20),
expectedRespStatus: http.StatusNoContent,
accessKey: credentials.AccessKey,
secretKey: credentials.SecretKey,
dates: []interface{}{curTimePlus5Min.Format(iso8601TimeFormat), curTime.Format(iso8601DateFormat), curTime.Format(yyyymmdd)},
policy: `{"expiration": "%s","conditions":[["eq", "$bucket", "` + bucketName + `"], ["starts-with", "$key", "test/"], ["eq", "$x-amz-algorithm", "AWS4-HMAC-SHA256"], ["eq", "$x-amz-date", "%s"], ["eq", "$x-amz-credential", "` + credentials.AccessKey + `/%s/us-east-1/s3/aws4_request"],["eq", "$x-amz-meta-uuid", "1234"]]}`,
},
// Corrupted Base 64 result
{
objectName: "test",


@ -40,41 +40,24 @@ func registerDistErasureRouters(router *mux.Router, endpointServerPools Endpoint
// List of some generic handlers which are applied to all incoming requests.
var globalHandlers = []mux.MiddlewareFunc{
// filters HTTP headers which are treated as metadata and are reserved
// for internal use only.
filterReservedMetadata,
// Enforce rules specific for TLS requests
setSSETLSHandler,
// Auth handler verifies incoming authorization headers and
// routes them accordingly. Clients receive an HTTP error for
// invalid/unsupported signatures.
setAuthHandler,
//
// Validates all incoming requests to have a valid date header.
setTimeValidityHandler,
// Validates if incoming request is for restricted buckets.
setReservedBucketHandler,
setAuthHandler,
// Redirect some pre-defined browser request paths to a static location prefix.
setBrowserRedirectHandler,
// Adds 'crossdomain.xml' policy handler to serve legacy flash clients.
setCrossDomainPolicy,
// Limits all header sizes to a maximum fixed limit
setRequestHeaderSizeLimitHandler,
// Limits all requests size to a maximum fixed limit
setRequestSizeLimitHandler,
// Limits all body and header sizes to a maximum fixed limit
setRequestLimitHandler,
// Network statistics
setHTTPStatsHandler,
// Validate all the incoming requests.
setRequestValidityHandler,
// Forward path style requests to actual host in a bucket federated setup.
setBucketForwardingHandler,
// set HTTP security headers such as Content-Security-Policy.
addSecurityHeaders,
// set x-amz-request-id header.
addCustomHeaders,
// add redirect handler to redirect
// requests when object layer is not
// initialized.
setRedirectHandler,
// Add new handlers here.
}
@ -104,6 +87,10 @@ func configureServerHandler(endpointServerPools EndpointServerPools) (http.Handl
// Add API router
registerAPIRouter(router)
// Enable bucket forwarding handler only if bucket federation is enabled.
if globalDNSConfig != nil && globalBucketFederation {
globalHandlers = append(globalHandlers, setBucketForwardingHandler)
}
router.Use(globalHandlers...)
return router, nil
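gorilla/mux executes middleware in the order registered via Use, so appending setBucketForwardingHandler only when federation is configured keeps that hop entirely out of non-federated request paths. In isolation (federationEnabled is a hypothetical stand-in for globalDNSConfig != nil && globalBucketFederation):

handlers := []mux.MiddlewareFunc{addCustomHeaders, addSecurityHeaders}
if federationEnabled { // hypothetical flag
	handlers = append(handlers, setBucketForwardingHandler)
}
router.Use(handlers...)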


@ -23,13 +23,13 @@ import (
"errors"
"fmt"
"io"
stdioutil "io/ioutil"
"io/ioutil"
"net/http"
"sort"
"strings"
"github.com/minio/minio/internal/crypto"
"github.com/minio/minio/internal/ioutil"
xioutil "github.com/minio/minio/internal/ioutil"
"github.com/minio/minio/internal/logger"
"github.com/minio/pkg/bucket/policy"
xnet "github.com/minio/pkg/net"
@ -187,7 +187,7 @@ func (api objectAPIHandlers) getObjectInArchiveFileHandler(ctx context.Context,
return
}
} else {
rc = stdioutil.NopCloser(bytes.NewReader([]byte{}))
rc = ioutil.NopCloser(bytes.NewReader([]byte{}))
}
defer rc.Close()
@ -199,10 +199,10 @@ func (api objectAPIHandlers) getObjectInArchiveFileHandler(ctx context.Context,
setHeadGetRespHeaders(w, r.Form)
httpWriter := ioutil.WriteOnClose(w)
httpWriter := xioutil.WriteOnClose(w)
// Write object content to response body
if _, err = io.Copy(httpWriter, rc); err != nil {
if _, err = xioutil.Copy(httpWriter, rc); err != nil {
if !httpWriter.HasWritten() {
// write error response only if no data or headers have been written to the client yet
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL)
@ -324,7 +324,7 @@ func getFilesListFromZIPObject(ctx context.Context, objectAPI ObjectLayer, bucke
if err != nil {
return nil, ObjectInfo{}, err
}
b, err := stdioutil.ReadAll(gr)
b, err := ioutil.ReadAll(gr)
if err != nil {
gr.Close()
return nil, ObjectInfo{}, err


@ -511,7 +511,7 @@ func serverMain(ctx *cli.Context) {
addrs = append(addrs, globalMinioAddr)
}
httpServer := xhttp.NewServer(addrs, criticalErrorHandler{corsHandler(handler)}, getCert)
httpServer := xhttp.NewServer(addrs, setCriticalErrorHandler(corsHandler(handler)), getCert)
httpServer.BaseContext = func(listener net.Listener) context.Context {
return GlobalContext
}
@ -570,7 +570,7 @@ func serverMain(ctx *cli.Context) {
}
// Initialize users credentials and policies in background right after config has initialized.
go globalIAMSys.Init(GlobalContext, newObject, globalEtcdClient)
go globalIAMSys.Init(GlobalContext, newObject, globalEtcdClient, globalRefreshIAMInterval)
initDataScanner(GlobalContext, newObject)


@ -19,6 +19,7 @@ package cmd
import (
"bytes"
"context"
"encoding/xml"
"fmt"
"io"
@ -158,6 +159,27 @@ func (s *TestSuiteCommon) SetUpSuite(c *check) {
s.secretKey = s.testServer.SecretKey
}
func (s *TestSuiteCommon) RestartTestServer(c *check) {
// Shutdown.
s.testServer.cancel()
s.testServer.Server.Close()
s.testServer.Obj.Shutdown(context.Background())
// Restart.
ctx, cancel := context.WithCancel(context.Background())
s.testServer.cancel = cancel
s.testServer = initTestServerWithBackend(ctx, c, s.testServer, s.testServer.Obj, s.testServer.rawDiskPaths)
if s.secure {
s.testServer.Server.StartTLS()
} else {
s.testServer.Server.Start()
}
s.client = s.testServer.Server.Client()
s.endPoint = s.testServer.Server.URL
}
func (s *TestSuiteCommon) TearDownSuite(c *check) {
s.testServer.Stop()
}


@ -37,9 +37,6 @@ const (
// Global service signal channel.
var globalServiceSignalCh chan serviceSignal
// GlobalServiceDoneCh - Global service done channel.
var GlobalServiceDoneCh <-chan struct{}
// GlobalContext context that is canceled when server is requested to shut down.
var GlobalContext context.Context
@ -48,7 +45,6 @@ var cancelGlobalContext context.CancelFunc
func initGlobalContext() {
GlobalContext, cancelGlobalContext = context.WithCancel(context.Background())
GlobalServiceDoneCh = GlobalContext.Done()
globalServiceSignalCh = make(chan serviceSignal)
}


@ -20,7 +20,6 @@ package cmd
import (
"bytes"
"context"
"crypto/rand"
"crypto/tls"
"encoding/base64"
"encoding/json"
@ -355,20 +354,12 @@ func (c *SiteReplicationSys) AddPeerClusters(ctx context.Context, sites []madmin
// Generate a secret key for the service account.
var secretKey string
{
secretKeyBuf := make([]byte, 40)
n, err := rand.Read(secretKeyBuf)
if err == nil && n != 40 {
err = fmt.Errorf("Unable to read 40 random bytes to generate secret key")
_, secretKey, err := auth.GenerateCredentials()
if err != nil {
return madmin.ReplicateAddStatus{}, SRError{
Cause: err,
Code: ErrInternalError,
}
if err != nil {
return madmin.ReplicateAddStatus{}, SRError{
Cause: err,
Code: ErrInternalError,
}
}
secretKey = strings.Replace(string([]byte(base64.StdEncoding.EncodeToString(secretKeyBuf))[:40]),
"/", "+", -1)
}
svcCred, err := globalIAMSys.NewServiceAccount(ctx, sites[selfIdx].AccessKey, nil, newServiceAccountOpts{
@ -1270,9 +1261,7 @@ func (c *SiteReplicationSys) getAdminClient(ctx context.Context, deploymentID st
}
func (c *SiteReplicationSys) getPeerCreds() (*auth.Credentials, error) {
globalIAMSys.store.rlock()
defer globalIAMSys.store.runlock()
creds, ok := globalIAMSys.iamUsersMap[c.state.ServiceAccountAccessKey]
creds, ok := globalIAMSys.store.GetUser(c.state.ServiceAccountAccessKey)
if !ok {
return nil, errors.New("site replication service account not found!")
}


@ -80,20 +80,20 @@ type FilesInfoVersions struct {
}
// FileInfoVersions represent a list of versions for a given file.
//msgp:tuple FileInfoVersions
// The above means that any added/deleted fields are incompatible.
type FileInfoVersions struct {
// Name of the volume.
Volume string
Volume string `msg:"v,omitempty"`
// Name of the file.
Name string
IsEmptyDir bool
Name string `msg:"n,omitempty"`
// Represents the latest mod time of the
// latest version.
LatestModTime time.Time
LatestModTime time.Time `msg:"lm"`
Versions []FileInfo
Versions []FileInfo `msg:"vs"`
}
// findVersionIndex will return the version index where the version
@ -115,69 +115,74 @@ func (f *FileInfoVersions) findVersionIndex(v string) int {
// The above means that any added/deleted fields are incompatible.
type FileInfo struct {
// Name of the volume.
Volume string
Volume string `msg:"v,omitempty"`
// Name of the file.
Name string
Name string `msg:"n,omitempty"`
// Version of the file.
VersionID string
VersionID string `msg:"vid,omitempty"`
// Indicates if the version is the latest
IsLatest bool
IsLatest bool `msg:"is"`
// Deleted is set when this FileInfo represents
// a deleted marker for a versioned bucket.
Deleted bool
Deleted bool `msg:"del"`
// TransitionStatus is set to Pending/Complete for transitioned
// entries based on state of transition
TransitionStatus string
TransitionStatus string `msg:"ts"`
// TransitionedObjName is the object name on the remote tier corresponding
// to object (version) on the source tier.
TransitionedObjName string
TransitionedObjName string `msg:"to"`
// TransitionTier is the storage class label assigned to remote tier.
TransitionTier string
TransitionTier string `msg:"tt"`
// TransitionVersionID stores a version ID of the object associated
// with the remote tier.
TransitionVersionID string
TransitionVersionID string `msg:"tv"`
// ExpireRestored indicates that the restored object is to be expired.
ExpireRestored bool
ExpireRestored bool `msg:"exp"`
// DataDir of the file
DataDir string
DataDir string `msg:"dd"`
// Indicates if this object is still in V1 format.
XLV1 bool
XLV1 bool `msg:"v1"`
// Date and time when the file was last modified, if Deleted
// is 'true' this value represents when the file was deleted.
ModTime time.Time
ModTime time.Time `msg:"mt"`
// Total file size.
Size int64
Size int64 `msg:"sz"`
// File mode bits.
Mode uint32
Mode uint32 `msg:"m"`
// File metadata
Metadata map[string]string
Metadata map[string]string `msg:"meta"`
// All the parts per object.
Parts []ObjectPartInfo
Parts []ObjectPartInfo `msg:"parts"`
// Erasure info for all objects.
Erasure ErasureInfo
Erasure ErasureInfo `msg:"ei"`
MarkDeleted bool // mark this version as deleted
ReplicationState ReplicationState // Internal replication state to be passed back in ObjectInfo
MarkDeleted bool `msg:"md"` // mark this version as deleted
ReplicationState ReplicationState `msg:"rs"` // Internal replication state to be passed back in ObjectInfo
Data []byte // optionally carries object data
Data []byte `msg:"d,allownil"` // optionally carries object data
NumVersions int
SuccessorModTime time.Time
NumVersions int `msg:"nv"`
SuccessorModTime time.Time `msg:"smt"`
Fresh bool // indicates this is a first time call to write FileInfo.
Fresh bool `msg:"fr"` // indicates this is a first time call to write FileInfo.
// Position of this version or object in a multi-object delete call,
// no other caller must set this value other than multi-object delete call.
// usage in other calls is undefined, please avoid.
Idx int `msg:"i"`
}
// InlineData returns true if object contents are inlined alongside its metadata.
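The //msgp:tuple directive trades evolvability for size: fields are encoded positionally as a msgpack array, so adding a field (Idx above) changes the array length, which is why the generated decoders check for exactly 25 elements. The new short msg tags keep names compact for any map-encoded uses. A hedged side-by-side on hypothetical types:

//msgp:tuple pointT
type pointT struct { // wire format: [x, y] (positional, compact, rigid)
	X int64 `msg:"x"`
	Y int64 `msg:"y"`
}

type pointM struct { // wire format: {"x": ..., "y": ...} (keyed, evolvable)
	X int64 `msg:"x"`
	Y int64 `msg:"y"`
}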

View file

@ -550,8 +550,8 @@ func (z *FileInfo) DecodeMsg(dc *msgp.Reader) (err error) {
err = msgp.WrapError(err)
return
}
if zb0001 != 24 {
err = msgp.ArrayError{Wanted: 24, Got: zb0001}
if zb0001 != 25 {
err = msgp.ArrayError{Wanted: 25, Got: zb0001}
return
}
z.Volume, err = dc.ReadString()
@ -711,13 +711,18 @@ func (z *FileInfo) DecodeMsg(dc *msgp.Reader) (err error) {
err = msgp.WrapError(err, "Fresh")
return
}
z.Idx, err = dc.ReadInt()
if err != nil {
err = msgp.WrapError(err, "Idx")
return
}
return
}
// EncodeMsg implements msgp.Encodable
func (z *FileInfo) EncodeMsg(en *msgp.Writer) (err error) {
// array header, size 24
err = en.Append(0xdc, 0x0, 0x18)
// array header, size 25
err = en.Append(0xdc, 0x0, 0x19)
if err != nil {
return
}
@ -860,14 +865,19 @@ func (z *FileInfo) EncodeMsg(en *msgp.Writer) (err error) {
err = msgp.WrapError(err, "Fresh")
return
}
err = en.WriteInt(z.Idx)
if err != nil {
err = msgp.WrapError(err, "Idx")
return
}
return
}
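The literal bytes in these Append calls are msgpack headers: 0xdc opens an array16 whose element count follows as a big-endian uint16 (0x0018 = 24, 0x0019 = 25), while single-byte headers such as 0x94 and 0x85, seen in the FileInfoVersions encoders below, are fixarray-of-4 (0x90 | 4) and fixmap-of-5 (0x80 | 5). For instance:

hdr25 := []byte{0xdc, 0x00, 0x19} // array16 header: 25 elements follow
hdr4 := []byte{0x94}              // fixarray header: 4 elements follow
_ = append(hdr25, hdr4...)        // headers precede the encoded fields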
// MarshalMsg implements msgp.Marshaler
func (z *FileInfo) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// array header, size 24
o = append(o, 0xdc, 0x0, 0x18)
// array header, size 25
o = append(o, 0xdc, 0x0, 0x19)
o = msgp.AppendString(o, z.Volume)
o = msgp.AppendString(o, z.Name)
o = msgp.AppendString(o, z.VersionID)
@ -911,6 +921,7 @@ func (z *FileInfo) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.AppendInt(o, z.NumVersions)
o = msgp.AppendTime(o, z.SuccessorModTime)
o = msgp.AppendBool(o, z.Fresh)
o = msgp.AppendInt(o, z.Idx)
return
}
@ -922,8 +933,8 @@ func (z *FileInfo) UnmarshalMsg(bts []byte) (o []byte, err error) {
err = msgp.WrapError(err)
return
}
if zb0001 != 24 {
err = msgp.ArrayError{Wanted: 24, Got: zb0001}
if zb0001 != 25 {
err = msgp.ArrayError{Wanted: 25, Got: zb0001}
return
}
z.Volume, bts, err = msgp.ReadStringBytes(bts)
@ -1083,6 +1094,11 @@ func (z *FileInfo) UnmarshalMsg(bts []byte) (o []byte, err error) {
err = msgp.WrapError(err, "Fresh")
return
}
z.Idx, bts, err = msgp.ReadIntBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Idx")
return
}
o = bts
return
}
@ -1100,87 +1116,62 @@ func (z *FileInfo) Msgsize() (s int) {
for za0003 := range z.Parts {
s += z.Parts[za0003].Msgsize()
}
s += z.Erasure.Msgsize() + msgp.BoolSize + z.ReplicationState.Msgsize() + msgp.BytesPrefixSize + len(z.Data) + msgp.IntSize + msgp.TimeSize + msgp.BoolSize
s += z.Erasure.Msgsize() + msgp.BoolSize + z.ReplicationState.Msgsize() + msgp.BytesPrefixSize + len(z.Data) + msgp.IntSize + msgp.TimeSize + msgp.BoolSize + msgp.IntSize
return
}
// DecodeMsg implements msgp.Decodable
func (z *FileInfoVersions) DecodeMsg(dc *msgp.Reader) (err error) {
var field []byte
_ = field
var zb0001 uint32
zb0001, err = dc.ReadMapHeader()
zb0001, err = dc.ReadArrayHeader()
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0001 > 0 {
zb0001--
field, err = dc.ReadMapKeyPtr()
if zb0001 != 4 {
err = msgp.ArrayError{Wanted: 4, Got: zb0001}
return
}
z.Volume, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Volume")
return
}
z.Name, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Name")
return
}
z.LatestModTime, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "LatestModTime")
return
}
var zb0002 uint32
zb0002, err = dc.ReadArrayHeader()
if err != nil {
err = msgp.WrapError(err, "Versions")
return
}
if cap(z.Versions) >= int(zb0002) {
z.Versions = (z.Versions)[:zb0002]
} else {
z.Versions = make([]FileInfo, zb0002)
}
for za0001 := range z.Versions {
err = z.Versions[za0001].DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err)
err = msgp.WrapError(err, "Versions", za0001)
return
}
switch msgp.UnsafeString(field) {
case "Volume":
z.Volume, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Volume")
return
}
case "Name":
z.Name, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Name")
return
}
case "IsEmptyDir":
z.IsEmptyDir, err = dc.ReadBool()
if err != nil {
err = msgp.WrapError(err, "IsEmptyDir")
return
}
case "LatestModTime":
z.LatestModTime, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "LatestModTime")
return
}
case "Versions":
var zb0002 uint32
zb0002, err = dc.ReadArrayHeader()
if err != nil {
err = msgp.WrapError(err, "Versions")
return
}
if cap(z.Versions) >= int(zb0002) {
z.Versions = (z.Versions)[:zb0002]
} else {
z.Versions = make([]FileInfo, zb0002)
}
for za0001 := range z.Versions {
err = z.Versions[za0001].DecodeMsg(dc)
if err != nil {
err = msgp.WrapError(err, "Versions", za0001)
return
}
}
default:
err = dc.Skip()
if err != nil {
err = msgp.WrapError(err)
return
}
}
}
return
}
// EncodeMsg implements msgp.Encodable
func (z *FileInfoVersions) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 5
// write "Volume"
err = en.Append(0x85, 0xa6, 0x56, 0x6f, 0x6c, 0x75, 0x6d, 0x65)
// array header, size 4
err = en.Append(0x94)
if err != nil {
return
}
@ -1189,41 +1180,16 @@ func (z *FileInfoVersions) EncodeMsg(en *msgp.Writer) (err error) {
err = msgp.WrapError(err, "Volume")
return
}
// write "Name"
err = en.Append(0xa4, 0x4e, 0x61, 0x6d, 0x65)
if err != nil {
return
}
err = en.WriteString(z.Name)
if err != nil {
err = msgp.WrapError(err, "Name")
return
}
// write "IsEmptyDir"
err = en.Append(0xaa, 0x49, 0x73, 0x45, 0x6d, 0x70, 0x74, 0x79, 0x44, 0x69, 0x72)
if err != nil {
return
}
err = en.WriteBool(z.IsEmptyDir)
if err != nil {
err = msgp.WrapError(err, "IsEmptyDir")
return
}
// write "LatestModTime"
err = en.Append(0xad, 0x4c, 0x61, 0x74, 0x65, 0x73, 0x74, 0x4d, 0x6f, 0x64, 0x54, 0x69, 0x6d, 0x65)
if err != nil {
return
}
err = en.WriteTime(z.LatestModTime)
if err != nil {
err = msgp.WrapError(err, "LatestModTime")
return
}
// write "Versions"
err = en.Append(0xa8, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x73)
if err != nil {
return
}
err = en.WriteArrayHeader(uint32(len(z.Versions)))
if err != nil {
err = msgp.WrapError(err, "Versions")
@ -1242,21 +1208,11 @@ func (z *FileInfoVersions) EncodeMsg(en *msgp.Writer) (err error) {
// MarshalMsg implements msgp.Marshaler
func (z *FileInfoVersions) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 5
// string "Volume"
o = append(o, 0x85, 0xa6, 0x56, 0x6f, 0x6c, 0x75, 0x6d, 0x65)
// array header, size 4
o = append(o, 0x94)
o = msgp.AppendString(o, z.Volume)
// string "Name"
o = append(o, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
o = msgp.AppendString(o, z.Name)
// string "IsEmptyDir"
o = append(o, 0xaa, 0x49, 0x73, 0x45, 0x6d, 0x70, 0x74, 0x79, 0x44, 0x69, 0x72)
o = msgp.AppendBool(o, z.IsEmptyDir)
// string "LatestModTime"
o = append(o, 0xad, 0x4c, 0x61, 0x74, 0x65, 0x73, 0x74, 0x4d, 0x6f, 0x64, 0x54, 0x69, 0x6d, 0x65)
o = msgp.AppendTime(o, z.LatestModTime)
// string "Versions"
o = append(o, 0xa8, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x73)
o = msgp.AppendArrayHeader(o, uint32(len(z.Versions)))
for za0001 := range z.Versions {
o, err = z.Versions[za0001].MarshalMsg(o)
@ -1270,72 +1226,48 @@ func (z *FileInfoVersions) MarshalMsg(b []byte) (o []byte, err error) {
// UnmarshalMsg implements msgp.Unmarshaler
func (z *FileInfoVersions) UnmarshalMsg(bts []byte) (o []byte, err error) {
var field []byte
_ = field
var zb0001 uint32
zb0001, bts, err = msgp.ReadMapHeaderBytes(bts)
zb0001, bts, err = msgp.ReadArrayHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
for zb0001 > 0 {
zb0001--
field, bts, err = msgp.ReadMapKeyZC(bts)
if zb0001 != 4 {
err = msgp.ArrayError{Wanted: 4, Got: zb0001}
return
}
z.Volume, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Volume")
return
}
z.Name, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Name")
return
}
z.LatestModTime, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "LatestModTime")
return
}
var zb0002 uint32
zb0002, bts, err = msgp.ReadArrayHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Versions")
return
}
if cap(z.Versions) >= int(zb0002) {
z.Versions = (z.Versions)[:zb0002]
} else {
z.Versions = make([]FileInfo, zb0002)
}
for za0001 := range z.Versions {
bts, err = z.Versions[za0001].UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err)
err = msgp.WrapError(err, "Versions", za0001)
return
}
switch msgp.UnsafeString(field) {
case "Volume":
z.Volume, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Volume")
return
}
case "Name":
z.Name, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Name")
return
}
case "IsEmptyDir":
z.IsEmptyDir, bts, err = msgp.ReadBoolBytes(bts)
if err != nil {
err = msgp.WrapError(err, "IsEmptyDir")
return
}
case "LatestModTime":
z.LatestModTime, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "LatestModTime")
return
}
case "Versions":
var zb0002 uint32
zb0002, bts, err = msgp.ReadArrayHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Versions")
return
}
if cap(z.Versions) >= int(zb0002) {
z.Versions = (z.Versions)[:zb0002]
} else {
z.Versions = make([]FileInfo, zb0002)
}
for za0001 := range z.Versions {
bts, err = z.Versions[za0001].UnmarshalMsg(bts)
if err != nil {
err = msgp.WrapError(err, "Versions", za0001)
return
}
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
err = msgp.WrapError(err)
return
}
}
}
o = bts
return
@ -1343,7 +1275,7 @@ func (z *FileInfoVersions) UnmarshalMsg(bts []byte) (o []byte, err error) {
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *FileInfoVersions) Msgsize() (s int) {
s = 1 + 7 + msgp.StringPrefixSize + len(z.Volume) + 5 + msgp.StringPrefixSize + len(z.Name) + 11 + msgp.BoolSize + 14 + msgp.TimeSize + 9 + msgp.ArrayHeaderSize
s = 1 + msgp.StringPrefixSize + len(z.Volume) + msgp.StringPrefixSize + len(z.Name) + msgp.TimeSize + msgp.ArrayHeaderSize
for za0001 := range z.Versions {
s += z.Versions[za0001].Msgsize()
}


@ -57,7 +57,7 @@ type StorageAPI interface {
// Metadata operations
DeleteVersion(ctx context.Context, volume, path string, fi FileInfo, forceDelMarker bool) error
DeleteVersions(ctx context.Context, volume string, versions []FileInfo) []error
DeleteVersions(ctx context.Context, volume string, versions []FileInfoVersions) []error
WriteMetadata(ctx context.Context, volume, path string, fi FileInfo) error
UpdateMetadata(ctx context.Context, volume, path string, fi FileInfo) error
ReadVersion(ctx context.Context, volume, path, versionID string, readData bool) (FileInfo, error)


@ -590,7 +590,7 @@ func (client *storageRESTClient) Delete(ctx context.Context, volume string, path
}
// DeleteVersions - deletes list of specified versions if present
func (client *storageRESTClient) DeleteVersions(ctx context.Context, volume string, versions []FileInfo) (errs []error) {
func (client *storageRESTClient) DeleteVersions(ctx context.Context, volume string, versions []FileInfoVersions) (errs []error) {
if len(versions) == 0 {
return errs
}


@ -18,7 +18,7 @@
package cmd
const (
storageRESTVersion = "v40" // Add ReplicationState field
storageRESTVersion = "v41" // Optimized DeleteVersions API
storageRESTVersionPrefix = SlashSeparator + storageRESTVersion
storageRESTPrefix = minioReservedBucketPath + "/storage"
)
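The bump matters because the tuple-encoded payloads changed shape: the version string is embedded in every storage REST URL, so a v40 peer calling a v41 server gets a clean HTTP 404 instead of silently mis-decoding the new FileInfoVersions format. Roughly (hedged sketch; the method path is illustrative):

// e.g. "/minio/storage" + "/v41" + "/deleteversions"
url := storageRESTPrefix + storageRESTVersionPrefix + "/deleteversions"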


@ -30,6 +30,7 @@ import (
"net/http"
"os/user"
"path"
"runtime/debug"
"strconv"
"strings"
"sync"
@ -37,10 +38,11 @@ import (
"github.com/tinylib/msgp/msgp"
jwtreq "github.com/golang-jwt/jwt/request"
jwtreq "github.com/golang-jwt/jwt/v4/request"
"github.com/gorilla/mux"
"github.com/minio/minio/internal/config"
xhttp "github.com/minio/minio/internal/http"
xioutil "github.com/minio/minio/internal/ioutil"
xjwt "github.com/minio/minio/internal/jwt"
"github.com/minio/minio/internal/logger"
xnet "github.com/minio/pkg/net"
@ -60,7 +62,6 @@ func (s *storageRESTServer) writeErrorResponse(w http.ResponseWriter, err error)
w.WriteHeader(http.StatusForbidden)
}
w.Write([]byte(err.Error()))
w.(http.Flusher).Flush()
}
// DefaultSkewTime - skew time is 15 minutes between minio peers.
@ -156,7 +157,6 @@ func (s *storageRESTServer) DiskInfoHandler(w http.ResponseWriter, r *http.Reque
if err != nil {
info.Error = err.Error()
}
defer w.(http.Flusher).Flush()
logger.LogIf(r.Context(), msgp.Encode(w, &info))
}
@ -178,6 +178,12 @@ func (s *storageRESTServer) NSScannerHandler(w http.ResponseWriter, r *http.Requ
ctx, cancel := context.WithCancel(r.Context())
defer cancel()
resp := streamHTTPResponse(w)
defer func() {
if r := recover(); r != nil {
debug.PrintStack()
resp.CloseWithError(fmt.Errorf("panic: %v", r))
}
}()
respW := msgp.NewWriter(resp)
// Collect updates, stream them before the full cache is sent.
@ -255,7 +261,6 @@ func (s *storageRESTServer) ListVolsHandler(w http.ResponseWriter, r *http.Reque
s.writeErrorResponse(w, err)
return
}
defer w.(http.Flusher).Flush()
logger.LogIf(r.Context(), msgp.Encode(w, VolsInfo(infos)))
}
@ -271,7 +276,6 @@ func (s *storageRESTServer) StatVolHandler(w http.ResponseWriter, r *http.Reques
s.writeErrorResponse(w, err)
return
}
defer w.(http.Flusher).Flush()
logger.LogIf(r.Context(), msgp.Encode(w, &info))
}
@ -501,9 +505,10 @@ func (s *storageRESTServer) ReadAllHandler(w http.ResponseWriter, r *http.Reques
s.writeErrorResponse(w, err)
return
}
// Reuse after return.
defer metaDataPoolPut(buf)
w.Header().Set(xhttp.ContentLength, strconv.Itoa(len(buf)))
w.Write(buf)
w.(http.Flusher).Flush()
}
// ReadFileHandler - read section of a file.
@ -540,6 +545,7 @@ func (s *storageRESTServer) ReadFileHandler(w http.ResponseWriter, r *http.Reque
verifier = NewBitrotVerifier(BitrotAlgorithmFromString(vars[storageRESTBitrotAlgo]), hash)
}
buf := make([]byte, length)
defer metaDataPoolPut(buf) // Reuse if we can.
_, err = s.storage.ReadFile(r.Context(), volume, filePath, int64(offset), buf, verifier)
if err != nil {
s.writeErrorResponse(w, err)
@ -547,7 +553,6 @@ func (s *storageRESTServer) ReadFileHandler(w http.ResponseWriter, r *http.Reque
}
w.Header().Set(xhttp.ContentLength, strconv.Itoa(len(buf)))
w.Write(buf)
w.(http.Flusher).Flush()
}
// ReadFileStreamHandler - streams a section of a file.
@ -577,13 +582,12 @@ func (s *storageRESTServer) ReadFileStreamHandler(w http.ResponseWriter, r *http
defer rc.Close()
w.Header().Set(xhttp.ContentLength, strconv.Itoa(length))
if _, err = io.Copy(w, rc); err != nil {
if _, err = xioutil.Copy(w, rc); err != nil {
if !xnet.IsNetworkOrHostDown(err, true) { // do not need to log disconnected clients
logger.LogIf(r.Context(), err)
}
return
}
w.(http.Flusher).Flush()
}
// ListDirHandler - list a directory.
@ -606,7 +610,6 @@ func (s *storageRESTServer) ListDirHandler(w http.ResponseWriter, r *http.Reques
return
}
gob.NewEncoder(w).Encode(&entries)
w.(http.Flusher).Flush()
}
// DeleteFileHandler - delete a file.
@ -648,7 +651,7 @@ func (s *storageRESTServer) DeleteVersionsHandler(w http.ResponseWriter, r *http
return
}
versions := make([]FileInfo, totalVersions)
versions := make([]FileInfoVersions, totalVersions)
decoder := msgp.NewReader(r.Body)
for i := 0; i < totalVersions; i++ {
dst := &versions[i]
@ -671,7 +674,6 @@ func (s *storageRESTServer) DeleteVersionsHandler(w http.ResponseWriter, r *http
}
}
encoder.Encode(dErrsResp)
w.(http.Flusher).Flush()
}
// RenameDataHandler - renames a meta object and data dir to destination.
@ -960,13 +962,22 @@ func streamHTTPResponse(w http.ResponseWriter) *httpStreamResponse {
return &h
}
var poolBuf8k = sync.Pool{
New: func() interface{} {
b := make([]byte, 8192)
return &b
},
}
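The pool deliberately stores *[]byte rather than []byte: placing a slice header directly into an interface{} allocates on every Put, while a pointer to one long-lived header does not. Checked-out buffers are then used as below (this mirrors waitForHTTPStream; dst and src are placeholders):

bufp := poolBuf8k.Get().(*[]byte) // one shared 8 KiB buffer per call
defer poolBuf8k.Put(bufp)
_, err := io.CopyBuffer(dst, src, *bufp)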
// waitForHTTPStream will wait for responses where
// streamHTTPResponse has been used.
// The returned reader contains the payload and must be closed if no error is returned.
func waitForHTTPStream(respBody io.ReadCloser, w io.Writer) error {
var tmp [1]byte
// 8K copy buffer, reused for less allocs...
var buf [8 << 10]byte
bufp := poolBuf8k.Get().(*[]byte)
buf := *bufp
defer poolBuf8k.Put(bufp)
for {
_, err := io.ReadFull(respBody, tmp[:])
if err != nil {
@ -976,7 +987,7 @@ func waitForHTTPStream(respBody io.ReadCloser, w io.Writer) error {
switch tmp[0] {
case 0:
// 0 is unbuffered, copy the rest.
_, err := io.CopyBuffer(w, respBody, buf[:])
_, err := io.CopyBuffer(w, respBody, buf)
if err == io.EOF {
return nil
}
@ -995,7 +1006,7 @@ func waitForHTTPStream(respBody io.ReadCloser, w io.Writer) error {
return err
}
length := binary.LittleEndian.Uint32(tmp[:])
_, err = io.CopyBuffer(w, io.LimitReader(respBody, int64(length)), buf[:])
_, err = io.CopyBuffer(w, io.LimitReader(respBody, int64(length)), buf)
if err != nil {
return err
}
@ -1043,7 +1054,6 @@ func (s *storageRESTServer) VerifyFileHandler(w http.ResponseWriter, r *http.Req
vresp.Err = StorageErr(err.Error())
}
encoder.Encode(vresp)
w.(http.Flusher).Flush()
}
// A single function to write certain errors to be fatal


@ -234,14 +234,16 @@ func (sts *stsAPIHandlers) AssumeRole(w http.ResponseWriter, r *http.Request) {
}
}
var err error
m := make(map[string]interface{})
m[expClaim], err = openid.GetDefaultExpiration(r.Form.Get(stsDurationSeconds))
duration, err := openid.GetDefaultExpiration(r.Form.Get(stsDurationSeconds))
if err != nil {
writeSTSErrorResponse(ctx, w, true, ErrSTSInvalidParameterValue, err)
return
}
m := map[string]interface{}{
expClaim: UTCNow().Add(duration).Unix(),
}
policies, err := globalIAMSys.PolicyDBGet(user.AccessKey, false)
if err != nil {
writeSTSErrorResponse(ctx, w, true, ErrSTSInvalidParameterValue, err)
@ -742,6 +744,31 @@ func (sts *stsAPIHandlers) AssumeRoleWithCertificate(w http.ResponseWriter, r *h
writeSTSErrorResponse(ctx, w, true, ErrSTSInvalidClientCertificate, err)
return
}
} else {
// Technically, there is no security argument for verifying the key usage
// when we don't verify that the certificate has been issued by a trusted CA.
// Any client can create a certificate with arbitrary key usage settings.
//
// However, this check ensures that a certificate with an invalid key usage
// gets rejected even when we skip certificate verification. This helps
// clients detect malformed certificates during testing instead of e.g.
// a self-signed certificate that works while a comparable certificate
// issued by a trusted CA fails due to the MinIO server being less strict
// w.r.t. key usage verification.
//
// Basically, MinIO is more consistent (from a client perspective) when
// we verify the key usage all the time.
var validKeyUsage bool
for _, usage := range certificate.ExtKeyUsage {
if usage == x509.ExtKeyUsageAny || usage == x509.ExtKeyUsageClientAuth {
validKeyUsage = true
break
}
}
if !validKeyUsage {
writeSTSErrorResponse(ctx, w, true, ErrSTSMissingParameter, errors.New("certificate is not valid for client authentication"))
return
}
}
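A certificate that passes this check must advertise ExtKeyUsageClientAuth (or ExtKeyUsageAny). A hedged sketch of a self-signed test certificate template that would satisfy it:

tmpl := &x509.Certificate{
	SerialNumber: big.NewInt(1),
	Subject:      pkix.Name{CommonName: "my-policy"}, // CN is mapped to a policy below
	NotBefore:    time.Now(),
	NotAfter:     time.Now().Add(time.Hour),
	KeyUsage:     x509.KeyUsageDigitalSignature,
	ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
}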
// We map the X.509 subject common name to the policy. So, a client
@ -773,7 +800,7 @@ func (sts *stsAPIHandlers) AssumeRoleWithCertificate(w http.ResponseWriter, r *h
parentUser := "tls:" + certificate.Subject.CommonName
tmpCredentials, err := auth.GetNewCredentialsWithMetadata(map[string]interface{}{
expClaim: time.Now().UTC().Add(expiry).Unix(),
expClaim: UTCNow().Add(expiry).Unix(),
parentClaim: parentUser,
subClaim: certificate.Subject.CommonName,
audClaim: certificate.Subject.Organization,

cmd/sts-handlers_test.go (new file, 340 lines)

@ -0,0 +1,340 @@
// Copyright (c) 2015-2021 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"context"
"fmt"
"os"
"strings"
"testing"
"github.com/minio/madmin-go"
minio "github.com/minio/minio-go/v7"
cr "github.com/minio/minio-go/v7/pkg/credentials"
)
func runAllIAMSTSTests(suite *TestSuiteIAM, c *check) {
suite.SetUpSuite(c)
suite.TestSTS(c)
suite.TearDownSuite(c)
}
func TestIAMInternalIDPSTSServerSuite(t *testing.T) {
baseTestCases := []TestSuiteCommon{
// Init and run test on FS backend with signature v4.
{serverType: "FS", signer: signerV4},
// Init and run test on FS backend, with tls enabled.
{serverType: "FS", signer: signerV4, secure: true},
// Init and run test on Erasure backend.
{serverType: "Erasure", signer: signerV4},
// Init and run test on ErasureSet backend.
{serverType: "ErasureSet", signer: signerV4},
}
testCases := []*TestSuiteIAM{}
for _, bt := range baseTestCases {
testCases = append(testCases,
newTestSuiteIAM(bt, false),
newTestSuiteIAM(bt, true),
)
}
for i, testCase := range testCases {
etcdStr := ""
if testCase.withEtcdBackend {
etcdStr = " (with etcd backend)"
}
t.Run(
fmt.Sprintf("Test: %d, ServerType: %s%s", i+1, testCase.serverType, etcdStr),
func(t *testing.T) {
runAllIAMSTSTests(testCase, &check{t, testCase.serverType})
},
)
}
}
func (s *TestSuiteIAM) TestSTS(c *check) {
ctx, cancel := context.WithTimeout(context.Background(), testDefaultTimeout)
defer cancel()
bucket := getRandomBucketName()
err := s.client.MakeBucket(ctx, bucket, minio.MakeBucketOptions{})
if err != nil {
c.Fatalf("bucket creat error: %v", err)
}
// Create policy, user and associate policy
policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
}
accessKey, secretKey := mustGenerateCredentials(c)
err = s.adm.SetUser(ctx, accessKey, secretKey, madmin.AccountEnabled)
if err != nil {
c.Fatalf("Unable to set user: %v", err)
}
err = s.adm.SetPolicy(ctx, policy, accessKey, false)
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
}
// confirm that the user is able to access the bucket
uClient := s.getUserClient(c, accessKey, secretKey, "")
c.mustListObjects(ctx, uClient, bucket)
assumeRole := cr.STSAssumeRole{
Client: s.TestSuiteCommon.client,
STSEndpoint: s.endPoint,
Options: cr.STSAssumeRoleOptions{
AccessKey: accessKey,
SecretKey: secretKey,
Location: "",
},
}
value, err := assumeRole.Retrieve()
if err != nil {
c.Fatalf("err calling assumeRole: %v", err)
}
minioClient, err := minio.New(s.endpoint, &minio.Options{
Creds: cr.NewStaticV4(value.AccessKeyID, value.SecretAccessKey, value.SessionToken),
Secure: s.secure,
Transport: s.TestSuiteCommon.client.Transport,
})
if err != nil {
c.Fatalf("Error initializing client: %v", err)
}
// Validate that the client from sts creds can access the bucket.
c.mustListObjects(ctx, minioClient, bucket)
// Validate that the client cannot remove any objects
err = minioClient.RemoveObject(ctx, bucket, "someobject", minio.RemoveObjectOptions{})
if err.Error() != "Access Denied." {
c.Fatalf("unexpected non-access-denied err: %v", err)
}
}
func (s *TestSuiteIAM) GetLDAPServer(c *check) string {
return os.Getenv(EnvTestLDAPServer)
}
// SetUpLDAP - expects to set up an LDAP test server using the test LDAP
// container and canned data from https://github.com/minio/minio-ldap-testing
func (s *TestSuiteIAM) SetUpLDAP(c *check, serverAddr string) {
ctx, cancel := context.WithTimeout(context.Background(), testDefaultTimeout)
defer cancel()
configCmds := []string{
"identity_ldap",
fmt.Sprintf("server_addr=%s", serverAddr),
"server_insecure=on",
"lookup_bind_dn=cn=admin,dc=min,dc=io",
"lookup_bind_password=admin",
"user_dn_search_base_dn=dc=min,dc=io",
"user_dn_search_filter=(uid=%s)",
"group_search_base_dn=ou=swengg,dc=min,dc=io",
"group_search_filter=(&(objectclass=groupofnames)(member=%d))",
}
_, err := s.adm.SetConfigKV(ctx, strings.Join(configCmds, " "))
if err != nil {
c.Fatalf("unable to setup LDAP for tests: %v", err)
}
s.RestartIAMSuite(c)
}
const (
EnvTestLDAPServer = "LDAP_TEST_SERVER"
)
func TestIAMWithLDAPServerSuite(t *testing.T) {
baseTestCases := []TestSuiteCommon{
// Init and run test on FS backend with signature v4.
{serverType: "FS", signer: signerV4},
// Init and run test on FS backend, with tls enabled.
{serverType: "FS", signer: signerV4, secure: true},
// Init and run test on Erasure backend.
{serverType: "Erasure", signer: signerV4},
// Init and run test on ErasureSet backend.
{serverType: "ErasureSet", signer: signerV4},
}
testCases := []*TestSuiteIAM{}
for _, bt := range baseTestCases {
testCases = append(testCases,
newTestSuiteIAM(bt, false),
newTestSuiteIAM(bt, true),
)
}
for i, testCase := range testCases {
etcdStr := ""
if testCase.withEtcdBackend {
etcdStr = " (with etcd backend)"
}
t.Run(
fmt.Sprintf("Test: %d, ServerType: %s%s", i+1, testCase.serverType, etcdStr),
func(t *testing.T) {
c := &check{t, testCase.serverType}
suite := testCase
ldapServer := os.Getenv(EnvTestLDAPServer)
if ldapServer == "" {
c.Skip("Skipping LDAP test as no LDAP server is provided.")
}
suite.SetUpSuite(c)
suite.SetUpLDAP(c, ldapServer)
suite.TestLDAPSTS(c)
suite.TearDownSuite(c)
},
)
}
}
func (s *TestSuiteIAM) TestLDAPSTS(c *check) {
ctx, cancel := context.WithTimeout(context.Background(), testDefaultTimeout)
defer cancel()
bucket := getRandomBucketName()
err := s.client.MakeBucket(ctx, bucket, minio.MakeBucketOptions{})
if err != nil {
c.Fatalf("bucket create error: %v", err)
}
// Create policy, user and associate policy
policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::%s/*"
]
}
]
}`, bucket))
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil {
c.Fatalf("policy add error: %v", err)
}
ldapID := cr.LDAPIdentity{
Client: s.TestSuiteCommon.client,
STSEndpoint: s.endPoint,
LDAPUsername: "dillon",
LDAPPassword: "dillon",
}
_, err = ldapID.Retrieve()
if err == nil {
c.Fatalf("Expected to fail to create a user with no associated policy!")
}
userDN := "uid=dillon,ou=people,ou=swengg,dc=min,dc=io"
err = s.adm.SetPolicy(ctx, policy, userDN, false)
if err != nil {
c.Fatalf("Unable to set policy: %v", err)
}
value, err := ldapID.Retrieve()
if err != nil {
c.Fatalf("Expected to generate STS creds, got err: %#v", err)
}
minioClient, err := minio.New(s.endpoint, &minio.Options{
Creds: cr.NewStaticV4(value.AccessKeyID, value.SecretAccessKey, value.SessionToken),
Secure: s.secure,
Transport: s.TestSuiteCommon.client.Transport,
})
if err != nil {
c.Fatalf("Error initializing client: %v", err)
}
// Validate that the client from sts creds can access the bucket.
c.mustListObjects(ctx, minioClient, bucket)
// Validate that the client cannot remove any objects
err = minioClient.RemoveObject(ctx, bucket, "someobject", minio.RemoveObjectOptions{})
if err.Error() != "Access Denied." {
c.Fatalf("unexpected non-access-denied err: %v", err)
}
// Remove the policy assignment on the user DN:
err = s.adm.SetPolicy(ctx, "", userDN, false)
if err != nil {
c.Fatalf("Unable to remove policy setting: %v", err)
}
_, err = ldapID.Retrieve()
if err == nil {
c.Fatalf("Expected to fail to create a user with no associated policy!")
}
// Set policy via group and validate policy assignment.
groupDN := "cn=projectb,ou=groups,ou=swengg,dc=min,dc=io"
err = s.adm.SetPolicy(ctx, policy, groupDN, true)
if err != nil {
c.Fatalf("Unable to set group policy: %v", err)
}
value, err = ldapID.Retrieve()
if err != nil {
c.Fatalf("Expected to generate STS creds, got err: %#v", err)
}
minioClient, err = minio.New(s.endpoint, &minio.Options{
Creds: cr.NewStaticV4(value.AccessKeyID, value.SecretAccessKey, value.SessionToken),
Secure: s.secure,
Transport: s.TestSuiteCommon.client.Transport,
})
if err != nil {
c.Fatalf("Error initializing client: %v", err)
}
// Validate that the client from sts creds can access the bucket.
c.mustListObjects(ctx, minioClient, bucket)
// Validate that the client cannot remove any objects
err = minioClient.RemoveObject(ctx, bucket, "someobject", minio.RemoveObjectOptions{})
c.Assert(err.Error(), "Access Denied.")
}


@ -292,13 +292,14 @@ func isSameType(obj1, obj2 interface{}) bool {
// s := StartTestServer(t,"Erasure")
// defer s.Stop()
type TestServer struct {
Root string
Disks EndpointServerPools
AccessKey string
SecretKey string
Server *httptest.Server
Obj ObjectLayer
cancel context.CancelFunc
Root string
Disks EndpointServerPools
AccessKey string
SecretKey string
Server *httptest.Server
Obj ObjectLayer
cancel context.CancelFunc
rawDiskPaths []string
}
// UnstartedTestServer - Configures a temp FS/Erasure backend,
@ -314,16 +315,22 @@ func UnstartedTestServer(t TestErrHandler, instanceType string) TestServer {
t.Fatal(err)
}
// set the server configuration.
// set new server configuration.
if err = newTestConfig(globalMinioDefaultRegion, objLayer); err != nil {
t.Fatalf("%s", err)
}
return initTestServerWithBackend(ctx, t, testServer, objLayer, disks)
}
// initializes a test server with the given object layer and disks.
func initTestServerWithBackend(ctx context.Context, t TestErrHandler, testServer TestServer, objLayer ObjectLayer, disks []string) TestServer {
// Test Server needs to start before formatting of disks.
// Get credential.
credentials := globalActiveCred
testServer.Obj = objLayer
testServer.rawDiskPaths = disks
testServer.Disks = mustGetPoolEndpoints(disks...)
testServer.AccessKey = credentials.AccessKey
testServer.SecretKey = credentials.SecretKey
@ -334,7 +341,7 @@ func UnstartedTestServer(t TestErrHandler, instanceType string) TestServer {
}
// Run TestServer.
testServer.Server = httptest.NewUnstartedServer(criticalErrorHandler{corsHandler(httpHandler)})
testServer.Server = httptest.NewUnstartedServer(setCriticalErrorHandler(corsHandler(httpHandler)))
globalObjLayerMutex.Lock()
globalObjectAPI = objLayer
@ -348,9 +355,11 @@ func UnstartedTestServer(t TestErrHandler, instanceType string) TestServer {
newAllSubsystems()
globalEtcdClient = nil
initAllSubsystems(ctx, objLayer)
globalIAMSys.Init(ctx, objLayer, globalEtcdClient)
globalIAMSys.Init(ctx, objLayer, globalEtcdClient, 2*time.Second)
return testServer
}


@ -81,6 +81,9 @@ var errGroupNotEmpty = errors.New("Specified group is not empty - cannot remove
// error returned in IAM subsystem when policy doesn't exist.
var errNoSuchPolicy = errors.New("Specified canned policy does not exist")
// error returned when policy to be deleted is in use.
var errPolicyInUse = errors.New("Specified policy is in use and cannot be deleted.")
// error returned in IAM subsystem when an external users system is configured.
var errIAMActionNotAllowed = errors.New("Specified IAM action is not allowed")


@ -973,73 +973,74 @@ func auditLogInternal(ctx context.Context, bucket, object string, opts AuditLogO
}
// Get the max throughput and iops numbers.
func speedTest(ctx context.Context, throughputSize, iopsSize int, concurrencyStart int, duration time.Duration, autotune bool) (madmin.SpeedTestResult, error) {
var result madmin.SpeedTestResult
func speedTest(ctx context.Context, throughputSize, concurrencyStart int, duration time.Duration, autotune bool) chan madmin.SpeedTestResult {
ch := make(chan madmin.SpeedTestResult, 1)
go func() {
defer close(ch)
objAPI := newObjectLayerFn()
if objAPI == nil {
return result, errServerNotInitialized
}
concurrency := concurrencyStart
throughputHighestGet := uint64(0)
throughputHighestPut := uint64(0)
var throughputHighestPutResults []SpeedtestResult
var throughputHighestGetResults []SpeedtestResult
for {
select {
case <-ctx.Done():
// If the client got disconnected stop the speedtest.
return result, errUnexpected
default:
objAPI := newObjectLayerFn()
if objAPI == nil {
return
}
results := globalNotificationSys.Speedtest(ctx, throughputSize, concurrency, duration)
sort.Slice(results, func(i, j int) bool {
return results[i].Endpoint < results[j].Endpoint
})
totalPut := uint64(0)
totalGet := uint64(0)
for _, result := range results {
totalPut += result.Uploads
totalGet += result.Downloads
}
if totalPut < throughputHighestPut && totalGet < throughputHighestGet {
break
concurrency := concurrencyStart
throughputHighestGet := uint64(0)
throughputHighestPut := uint64(0)
var throughputHighestResults []SpeedtestResult
sendResult := func() {
var result madmin.SpeedTestResult
durationSecs := duration.Seconds()
result.GETStats.ThroughputPerSec = throughputHighestGet / uint64(durationSecs)
result.GETStats.ObjectsPerSec = throughputHighestGet / uint64(throughputSize) / uint64(durationSecs)
result.PUTStats.ThroughputPerSec = throughputHighestPut / uint64(durationSecs)
result.PUTStats.ObjectsPerSec = throughputHighestPut / uint64(throughputSize) / uint64(durationSecs)
for i := 0; i < len(throughputHighestResults); i++ {
errStr := ""
if throughputHighestResults[i].Error != "" {
errStr = throughputHighestResults[i].Error
}
result.PUTStats.Servers = append(result.PUTStats.Servers, madmin.SpeedTestStatServer{
Endpoint: throughputHighestResults[i].Endpoint,
ThroughputPerSec: throughputHighestResults[i].Uploads / uint64(durationSecs),
ObjectsPerSec: throughputHighestResults[i].Uploads / uint64(throughputSize) / uint64(durationSecs),
Err: errStr,
})
result.GETStats.Servers = append(result.GETStats.Servers, madmin.SpeedTestStatServer{
Endpoint: throughputHighestResults[i].Endpoint,
ThroughputPerSec: throughputHighestResults[i].Downloads / uint64(durationSecs),
ObjectsPerSec: throughputHighestResults[i].Downloads / uint64(throughputSize) / uint64(durationSecs),
Err: errStr,
})
}
numDisks := 0
if pools, ok := objAPI.(*erasureServerPools); ok {
for _, set := range pools.serverPools {
numDisks = set.setCount * set.setDriveCount
}
}
result.Disks = numDisks
result.Servers = len(globalNotificationSys.peerClients) + 1
result.Version = Version
result.Size = throughputSize
result.Concurrent = concurrency
ch <- result
}
if totalPut > throughputHighestPut {
throughputHighestPut = totalPut
throughputHighestPutResults = results
}
if totalGet > throughputHighestGet {
throughputHighestGet = totalGet
throughputHighestGetResults = results
}
if !autotune {
break
}
// Try with a higher concurrency to see if we get better throughput
concurrency += (concurrency + 1) / 2
}
concurrency = concurrencyStart
iopsHighestPut := uint64(0)
iopsHighestGet := uint64(0)
var iopsHighestPutResults []SpeedtestResult
var iopsHighestGetResults []SpeedtestResult
if autotune {
for {
select {
case <-ctx.Done():
// If the client got disconnected stop the speedtest.
return result, errUnexpected
return
default:
}
results := globalNotificationSys.Speedtest(ctx, iopsSize, concurrency, duration)
results := globalNotificationSys.Speedtest(ctx, throughputSize, concurrency, duration)
sort.Slice(results, func(i, j int) bool {
return results[i].Endpoint < results[j].Endpoint
})
@ -1049,85 +1050,49 @@ func speedTest(ctx context.Context, throughputSize, iopsSize int, concurrencySta
totalPut += result.Uploads
totalGet += result.Downloads
}
if totalPut < iopsHighestPut && totalGet < iopsHighestGet {
if totalGet < throughputHighestGet {
// The following check is for situations
// where Writes() scale higher than Reads()
// - practically speaking this never happens
// and should never happen - however it has
// been seen recently due to hardware issues
// causing Reads() to go slower than Writes().
//
// Send such results anyway as this shall
// expose the underlying problem.
if totalPut > throughputHighestPut {
throughputHighestResults = results
throughputHighestPut = totalPut
// let the client see lower value as well
throughputHighestGet = totalGet
}
sendResult()
break
}
if totalPut > iopsHighestPut {
iopsHighestPut = totalPut
iopsHighestPutResults = results
doBreak := false
if float64(totalGet-throughputHighestGet)/float64(totalGet) < 0.025 {
doBreak = true
}
if totalGet > iopsHighestGet {
iopsHighestGet = totalGet
iopsHighestGetResults = results
throughputHighestGet = totalGet
throughputHighestResults = results
throughputHighestPut = totalPut
if doBreak {
sendResult()
break
}
if !autotune {
sendResult()
break
}
sendResult()
// Try with a higher concurrency to see if we get better throughput
concurrency += (concurrency + 1) / 2
}
} else {
iopsHighestPut = throughputHighestPut
iopsHighestGet = throughputHighestGet
iopsHighestPutResults = throughputHighestPutResults
iopsHighestGetResults = throughputHighestGetResults
}
if len(throughputHighestPutResults) != len(iopsHighestPutResults) {
return result, errors.New("throughput and iops differ in number of nodes")
}
if len(throughputHighestGetResults) != len(iopsHighestGetResults) {
return result, errors.New("throughput and iops differ in number of nodes")
}
durationSecs := duration.Seconds()
result.PUTStats.ThroughputPerSec = throughputHighestPut / uint64(durationSecs)
result.PUTStats.ObjectsPerSec = iopsHighestPut / uint64(iopsSize) / uint64(durationSecs)
for i := 0; i < len(throughputHighestPutResults); i++ {
errStr := ""
if throughputHighestPutResults[i].Error != "" {
errStr = throughputHighestPutResults[i].Error
}
if iopsHighestPutResults[i].Error != "" {
errStr = iopsHighestPutResults[i].Error
}
result.PUTStats.Servers = append(result.PUTStats.Servers, madmin.SpeedTestStatServer{
Endpoint: throughputHighestPutResults[i].Endpoint,
ThroughputPerSec: throughputHighestPutResults[i].Uploads / uint64(durationSecs),
ObjectsPerSec: iopsHighestPutResults[i].Uploads / uint64(iopsSize) / uint64(durationSecs),
Err: errStr,
})
}
result.GETStats.ThroughputPerSec = throughputHighestGet / uint64(durationSecs)
result.GETStats.ObjectsPerSec = iopsHighestGet / uint64(iopsSize) / uint64(durationSecs)
for i := 0; i < len(throughputHighestGetResults); i++ {
errStr := ""
if throughputHighestGetResults[i].Error != "" {
errStr = throughputHighestGetResults[i].Error
}
if iopsHighestGetResults[i].Error != "" {
errStr = iopsHighestGetResults[i].Error
}
result.GETStats.Servers = append(result.GETStats.Servers, madmin.SpeedTestStatServer{
Endpoint: throughputHighestGetResults[i].Endpoint,
ThroughputPerSec: throughputHighestGetResults[i].Downloads / uint64(durationSecs),
ObjectsPerSec: iopsHighestGetResults[i].Downloads / uint64(iopsSize) / uint64(durationSecs),
Err: errStr,
})
}
numDisks := 0
if pools, ok := objAPI.(*erasureServerPools); ok {
for _, set := range pools.serverPools {
numDisks = set.setCount * set.setDriveCount
}
}
result.Disks = numDisks
result.Servers = len(globalNotificationSys.peerClients) + 1
result.Version = Version
return result, nil
}()
return ch
}
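To make the autotune termination condition above easier to follow, here is a simplified, self-contained sketch under stated assumptions: `autotune` and its starting concurrency are hypothetical stand-ins, not the server's identifiers. Concurrency grows by roughly 50% per round, and the loop stops once GET throughput regresses or improves by less than 2.5% over the best round so far.

```go
package main

import "fmt"

// autotune ramps concurrency until throughput stops improving meaningfully.
// Illustrative only; the server loop above additionally streams each round's
// result to the client over a channel.
func autotune(measure func(concurrency int) uint64) (best uint64, atConcurrency int) {
	concurrency := 32 // hypothetical starting concurrency
	for {
		cur := measure(concurrency)
		if cur <= best {
			return best, atConcurrency // regressed or flat: keep previous best
		}
		gain := float64(cur-best) / float64(cur)
		best, atConcurrency = cur, concurrency
		if gain < 0.025 {
			return best, atConcurrency // under 2.5% improvement: stop ramping
		}
		concurrency += (concurrency + 1) / 2 // same growth rule as the server loop
	}
}

func main() {
	// Fake measurement that saturates once concurrency exceeds 100.
	best, conc := autotune(func(c int) uint64 {
		if c > 100 {
			c = 100
		}
		return uint64(c) * 10
	})
	fmt.Println(best, conc)
}
```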


@ -47,7 +47,7 @@ func (az *warmBackendAzure) getDest(object string) string {
}
func (az *warmBackendAzure) tier() azblob.AccessTierType {
for _, t := range azblob.PossibleAccessTierTypeValues() {
if strings.ToLower(az.StorageClass) == strings.ToLower(string(t)) {
if strings.EqualFold(az.StorageClass, string(t)) {
return t
}
}


@ -27,6 +27,8 @@ import (
"google.golang.org/api/googleapi"
"google.golang.org/api/iterator"
"google.golang.org/api/option"
xioutil "github.com/minio/minio/internal/ioutil"
)
type warmBackendGCS struct {
@ -54,7 +56,7 @@ func (gcs *warmBackendGCS) Put(ctx context.Context, key string, data io.Reader,
if gcs.StorageClass != "" {
w.ObjectAttrs.StorageClass = gcs.StorageClass
}
if _, err := io.Copy(w, data); err != nil {
if _, err := xioutil.Copy(w, data); err != nil {
return "", gcsToObjectError(err, gcs.Bucket, key)
}


@ -50,6 +50,10 @@ func checkWarmBackend(ctx context.Context, w WarmBackend) error {
var empty bytes.Reader
rv, err := w.Put(ctx, probeObject, &empty, 0)
if err != nil {
switch err.(type) {
case BackendDown:
return err
}
return tierPermErr{
Op: tierPut,
Err: err,
@ -58,6 +62,10 @@ func checkWarmBackend(ctx context.Context, w WarmBackend) error {
_, err = w.Get(ctx, probeObject, rv, WarmBackendGetOpts{})
if err != nil {
switch err.(type) {
case BackendDown:
return err
}
switch {
case isErrBucketNotFound(err):
return errTierBucketNotFound
@ -72,6 +80,10 @@ func checkWarmBackend(ctx context.Context, w WarmBackend) error {
}
if err = w.Remove(ctx, probeObject, rv); err != nil {
switch err.(type) {
case BackendDown:
return err
}
return tierPermErr{
Op: tierDelete,
Err: err,


@ -421,8 +421,8 @@ func (p *xlStorageDiskIDCheck) Delete(ctx context.Context, volume string, path s
// DeleteVersions deletes slice of versions, it can be same object
// or multiple objects.
func (p *xlStorageDiskIDCheck) DeleteVersions(ctx context.Context, volume string, versions []FileInfo) (errs []error) {
// Mererly for tracing storage
func (p *xlStorageDiskIDCheck) DeleteVersions(ctx context.Context, volume string, versions []FileInfoVersions) (errs []error) {
// Merely for tracing storage
path := ""
if len(versions) > 0 {
path = versions[0].Name


@ -821,13 +821,102 @@ func (s *xlStorage) ListDir(ctx context.Context, volume, dirPath string, count i
return entries, nil
}
func (s *xlStorage) deleteVersions(ctx context.Context, volume, path string, fis ...FileInfo) error {
buf, err := s.ReadAll(ctx, volume, pathJoin(path, xlStorageFormatFile))
if err != nil {
if err != errFileNotFound {
return err
}
metaDataPoolPut(buf) // Never used, return it
buf, err = s.ReadAll(ctx, volume, pathJoin(path, xlStorageFormatFileV1))
if err != nil {
return err
}
}
if len(buf) == 0 {
return errFileNotFound
}
volumeDir, err := s.getVolDir(volume)
if err != nil {
return err
}
if !isXL2V1Format(buf) {
// Delete the meta file, if there are no more versions the
// top level parent is automatically removed.
return s.deleteFile(volumeDir, pathJoin(volumeDir, path), true)
}
var xlMeta xlMetaV2
if err = xlMeta.Load(buf); err != nil {
return err
}
var (
dataDir string
lastVersion bool
)
for _, fi := range fis {
dataDir, lastVersion, err = xlMeta.DeleteVersion(fi)
if err != nil {
return err
}
if dataDir != "" {
versionID := fi.VersionID
if versionID == "" {
versionID = nullVersionID
}
// PR #11758 used DataDir, preserve it
// for users who might have used master
// branch
if !xlMeta.data.remove(versionID, dataDir) {
filePath := pathJoin(volumeDir, path, dataDir)
if err = checkPathLength(filePath); err != nil {
return err
}
if err = s.moveToTrash(filePath, true); err != nil {
if err != errFileNotFound {
return err
}
}
}
}
}
if !lastVersion {
buf, err = xlMeta.AppendTo(metaDataPoolGet())
defer metaDataPoolPut(buf)
if err != nil {
return err
}
return s.WriteAll(ctx, volume, pathJoin(path, xlStorageFormatFile), buf)
}
// Move xl.meta to trash
filePath := pathJoin(volumeDir, path, xlStorageFormatFile)
if err = checkPathLength(filePath); err != nil {
return err
}
err = s.moveToTrash(filePath, false)
if err == nil || err == errFileNotFound {
s.deleteFile(volumeDir, pathJoin(volumeDir, path), false)
}
return err
}
// DeleteVersions deletes slice of versions, it can be same object
// or multiple objects.
func (s *xlStorage) DeleteVersions(ctx context.Context, volume string, versions []FileInfo) []error {
func (s *xlStorage) DeleteVersions(ctx context.Context, volume string, versions []FileInfoVersions) []error {
errs := make([]error, len(versions))
for i, version := range versions {
if err := s.DeleteVersion(ctx, volume, version.Name, version, false); err != nil {
for i, fiv := range versions {
if err := s.deleteVersions(ctx, volume, fiv.Name, fiv.Versions...); err != nil {
errs[i] = err
}
}
@ -859,6 +948,8 @@ func (s *xlStorage) DeleteVersion(ctx context.Context, volume, path string, fi F
// Create a new xl.meta with a delete marker in it
return s.WriteMetadata(ctx, volume, path, fi)
}
metaDataPoolPut(buf) // Never used, return it
buf, err = s.ReadAll(ctx, volume, pathJoin(path, xlStorageFormatFileV1))
if err != nil {
if err == errFileNotFound && fi.VersionID != "" {
@ -1232,12 +1323,29 @@ func (s *xlStorage) readAllData(volumeDir string, filePath string) (buf []byte,
SmallFile: true,
}
defer r.Close()
buf, err = ioutil.ReadAll(r)
// Get size for precise allocation.
stat, err := f.Stat()
if err != nil {
err = osErrToFileErr(err)
buf, err = ioutil.ReadAll(r)
return buf, osErrToFileErr(err)
}
if stat.IsDir() {
return nil, errFileNotFound
}
return buf, err
// Read into appropriate buffer.
sz := stat.Size()
if sz <= metaDataReadDefault {
buf = metaDataPoolGet()
buf = buf[:sz]
} else {
buf = make([]byte, sz)
}
// Read file...
_, err = io.ReadFull(r, buf)
return buf, osErrToFileErr(err)
}
// ReadAll reads from r until an error or EOF and returns the data it read.
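The sized-read pattern introduced above can be summarized in a standalone sketch: stat the file first so small reads reuse a pooled buffer and large reads get one precisely sized allocation, avoiding ioutil.ReadAll's repeated growth copies. `readSized`, `metaBufSize`, and the pool setup here are hypothetical stand-ins for this file's internals.

```go
package main

import (
	"fmt"
	"io"
	"os"
	"sync"
)

// metaBufSize mirrors (hypothetically) the small-read threshold above.
const metaBufSize = 16 << 10

var bufPool = sync.Pool{New: func() interface{} {
	b := make([]byte, metaBufSize)
	return &b
}}

// readSized reads a whole file using a pooled buffer for small files and a
// single exact-size allocation for large ones.
func readSized(f *os.File) ([]byte, error) {
	st, err := f.Stat()
	if err != nil {
		return nil, err
	}
	sz := int(st.Size())
	var buf []byte
	if sz <= metaBufSize {
		buf = (*bufPool.Get().(*[]byte))[:sz] // reuse a pooled buffer
	} else {
		buf = make([]byte, sz) // one dedicated allocation
	}
	_, err = io.ReadFull(f, buf)
	return buf, err
}

func main() {
	f, err := os.Open("/etc/hosts")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	b, err := readSized(f)
	fmt.Println(len(b), err)
}
```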


@ -443,7 +443,8 @@ func TestXLStorageReadAll(t *testing.T) {
for i, testCase := range testCases {
dataRead, err = xlStorage.ReadAll(context.Background(), testCase.volume, testCase.path)
if err != testCase.err {
t.Fatalf("TestXLStorage %d: Expected err \"%s\", got err \"%s\"", i+1, testCase.err, err)
t.Errorf("TestXLStorage %d: Expected err \"%v\", got err \"%v\"", i+1, testCase.err, err)
continue
}
if err == nil {
if string(dataRead) != string([]byte("Hello, World")) {


@ -1,10 +1,22 @@
#!/usr/bin/env bash
trap 'catch' ERR
trap 'catch $LINENO' ERR
# shellcheck disable=SC2120
catch() {
if [ $# -ne 0 ]; then
echo "error on line $1"
for site in sitea siteb sitec; do
echo "$site server logs ========="
cat "/tmp/${site}_1.log"
echo "==========================="
cat "/tmp/${site}_2.log"
done
fi
echo "Cleaning up instances of MinIO"
pkill minio
pkill -9 minio
rm -rf /tmp/multisitea
rm -rf /tmp/multisiteb
rm -rf /tmp/multisitec
@ -13,47 +25,47 @@ catch() {
catch
set -e
go install -v
export MINIO_BROWSER=off
export MINIO_ROOT_USER="minio"
export MINIO_ROOT_PASSWORD="minio123"
export MINIO_PROMETHEUS_AUTH_TYPE=public
export PATH=${GOPATH}/bin:${PATH}
minio server --address :9001 "http://localhost:9001/tmp/multisitea/data/disterasure/xl{1...4}" \
"http://localhost:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_1.log 2>&1 &
minio server --address :9002 "http://localhost:9001/tmp/multisitea/data/disterasure/xl{1...4}" \
"http://localhost:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_2.log 2>&1 &
minio server --address 127.0.0.1:9001 "http://127.0.0.1:9001/tmp/multisitea/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_1.log 2>&1 &
minio server --address 127.0.0.1:9002 "http://127.0.0.1:9001/tmp/multisitea/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9002/tmp/multisitea/data/disterasure/xl{5...8}" >/tmp/sitea_2.log 2>&1 &
minio server --address :9003 "http://localhost:9003/tmp/multisiteb/data/disterasure/xl{1...4}" \
"http://localhost:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_1.log 2>&1 &
minio server --address :9004 "http://localhost:9003/tmp/multisiteb/data/disterasure/xl{1...4}" \
"http://localhost:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_2.log 2>&1 &
minio server --address 127.0.0.1:9003 "http://127.0.0.1:9003/tmp/multisiteb/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_1.log 2>&1 &
minio server --address 127.0.0.1:9004 "http://127.0.0.1:9003/tmp/multisiteb/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9004/tmp/multisiteb/data/disterasure/xl{5...8}" >/tmp/siteb_2.log 2>&1 &
minio server --address :9005 "http://localhost:9005/tmp/multisitec/data/disterasure/xl{1...4}" \
"http://localhost:9006/tmp/multisitec/data/disterasure/xl{5...8}" >/tmp/sitec_1.log 2>&1 &
minio server --address :9006 "http://localhost:9005/tmp/multisitec/data/disterasure/xl{1...4}" \
"http://localhost:9006/tmp/multisitec/data/disterasure/xl{5...8}" >/tmp/sitec_2.log 2>&1 &
minio server --address 127.0.0.1:9005 "http://127.0.0.1:9005/tmp/multisitec/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9006/tmp/multisitec/data/disterasure/xl{5...8}" >/tmp/sitec_1.log 2>&1 &
minio server --address 127.0.0.1:9006 "http://127.0.0.1:9005/tmp/multisitec/data/disterasure/xl{1...4}" \
"http://127.0.0.1:9006/tmp/multisitec/data/disterasure/xl{5...8}" >/tmp/sitec_2.log 2>&1 &
sleep 30
mc alias set sitea http://localhost:9001 minio minio123
mc alias set sitea http://127.0.0.1:9001 minio minio123
mc mb sitea/bucket
mc version enable sitea/bucket
mc mb -l sitea/olockbucket
mc alias set siteb http://localhost:9004 minio minio123
mc alias set siteb http://127.0.0.1:9004 minio minio123
mc mb siteb/bucket/
mc version enable siteb/bucket/
mc mb -l siteb/olockbucket/
mc alias set sitec http://localhost:9006 minio minio123
mc alias set sitec http://127.0.0.1:9006 minio minio123
mc mb sitec/bucket/
mc version enable sitec/bucket/
mc mb -l sitec/olockbucket
echo "adding replication config for site a -> site b"
remote_arn=$(mc admin bucket remote add sitea/bucket/ \
http://minio:minio123@localhost:9004/bucket \
http://minio:minio123@127.0.0.1:9004/bucket \
--service "replication" --json | jq -r ".RemoteARN")
echo "adding replication rule for a -> b : ${remote_arn}"
sleep 1
@ -64,7 +76,7 @@ sleep 1
echo "adding replication config for site b -> site a"
remote_arn=$(mc admin bucket remote add siteb/bucket/ \
http://minio:minio123@localhost:9001/bucket \
http://minio:minio123@127.0.0.1:9001/bucket \
--service "replication" --json | jq -r ".RemoteARN")
sleep 1
echo "adding replication rule for b -> a : ${remote_arn}"
@ -75,7 +87,7 @@ sleep 1
echo "adding replication config for site a -> site c"
remote_arn=$(mc admin bucket remote add sitea/bucket/ \
http://minio:minio123@localhost:9006/bucket \
http://minio:minio123@127.0.0.1:9006/bucket \
--service "replication" --json | jq -r ".RemoteARN")
sleep 1
echo "adding replication rule for a -> c : ${remote_arn}"
@ -85,7 +97,7 @@ mc replicate add sitea/bucket/ \
sleep 1
echo "adding replication config for site c -> site a"
remote_arn=$(mc admin bucket remote add sitec/bucket/ \
http://minio:minio123@localhost:9001/bucket \
http://minio:minio123@127.0.0.1:9001/bucket \
--service "replication" --json | jq -r ".RemoteARN")
sleep 1
echo "adding replication rule for c -> a : ${remote_arn}"
@ -95,7 +107,7 @@ mc replicate add sitec/bucket/ \
sleep 1
echo "adding replication config for site b -> site c"
remote_arn=$(mc admin bucket remote add siteb/bucket/ \
http://minio:minio123@localhost:9006/bucket \
http://minio:minio123@127.0.0.1:9006/bucket \
--service "replication" --json | jq -r ".RemoteARN")
sleep 1
echo "adding replication rule for b -> c : ${remote_arn}"
@ -106,7 +118,7 @@ sleep 1
echo "adding replication config for site c -> site b"
remote_arn=$(mc admin bucket remote add sitec/bucket \
http://minio:minio123@localhost:9004/bucket \
http://minio:minio123@127.0.0.1:9004/bucket \
--service "replication" --json | jq -r ".RemoteARN")
sleep 1
echo "adding replication rule for c -> b : ${remote_arn}"
@ -116,7 +128,7 @@ mc replicate add sitec/bucket/ \
sleep 1
echo "adding replication config for olockbucket site a -> site b"
remote_arn=$(mc admin bucket remote add sitea/olockbucket/ \
http://minio:minio123@localhost:9004/olockbucket \
http://minio:minio123@127.0.0.1:9004/olockbucket \
--service "replication" --json | jq -r ".RemoteARN")
sleep 1
echo "adding replication rule for olockbucket a -> b : ${remote_arn}"
@ -126,7 +138,7 @@ mc replicate add sitea/olockbucket/ \
sleep 1
echo "adding replication config for site b -> site a"
remote_arn=$(mc admin bucket remote add siteb/olockbucket/ \
http://minio:minio123@localhost:9001/olockbucket \
http://minio:minio123@127.0.0.1:9001/olockbucket \
--service "replication" --json | jq -r ".RemoteARN")
sleep 1
echo "adding replication rule for olockbucket b -> a : ${remote_arn}"
@ -136,7 +148,7 @@ mc replicate add siteb/olockbucket/ \
sleep 1
echo "adding replication config for olockbucket site a -> site c"
remote_arn=$(mc admin bucket remote add sitea/olockbucket/ \
http://minio:minio123@localhost:9006/olockbucket \
http://minio:minio123@127.0.0.1:9006/olockbucket \
--service "replication" --json | jq -r ".RemoteARN")
sleep 1
echo "adding replication rule for olockbucket a -> c : ${remote_arn}"
@ -146,7 +158,7 @@ mc replicate add sitea/olockbucket/ \
sleep 1
echo "adding replication config for site c -> site a"
remote_arn=$(mc admin bucket remote add sitec/olockbucket/ \
http://minio:minio123@localhost:9001/olockbucket \
http://minio:minio123@127.0.0.1:9001/olockbucket \
--service "replication" --json | jq -r ".RemoteARN")
sleep 1
echo "adding replication rule for olockbucket c -> a : ${remote_arn}"
@ -156,7 +168,7 @@ mc replicate add sitec/olockbucket/ \
sleep 1
echo "adding replication config for site b -> site c"
remote_arn=$(mc admin bucket remote add siteb/olockbucket/ \
http://minio:minio123@localhost:9006/olockbucket \
http://minio:minio123@127.0.0.1:9006/olockbucket \
--service "replication" --json | jq -r ".RemoteARN")
sleep 1
echo "adding replication rule for olockbucket b -> c : ${remote_arn}"
@ -166,7 +178,7 @@ mc replicate add siteb/olockbucket/ \
sleep 1
echo "adding replication config for site c -> site b"
remote_arn=$(mc admin bucket remote add sitec/olockbucket \
http://minio:minio123@localhost:9004/olockbucket \
http://minio:minio123@127.0.0.1:9004/olockbucket \
--service "replication" --json | jq -r ".RemoteARN")
sleep 1
echo "adding replication rule for olockbucket c -> b : ${remote_arn}"
@ -175,11 +187,33 @@ mc replicate add sitec/olockbucket/ \
--replicate "existing-objects,delete,delete-marker,replica-metadata-sync" --priority 3
sleep 1
echo "Set default governance retention 30d"
mc retention set --default governance 30d sitea/olockbucket
echo "Copying data to source sitea"
echo "Copying data to source sitea/bucket"
mc cp --quiet /etc/hosts sitea/bucket
sleep 1
echo "Copying data to source sitea/olockbucket"
mc cp --quiet /etc/hosts sitea/olockbucket
sleep 1
echo "Verifying the metadata difference between source and target"
diff -pruN <(mc stat --json sitea/bucket/hosts | jq .) <(mc stat --json siteb/bucket/hosts | jq .)
diff -pruN <(mc stat --json sitea/bucket/hosts | jq .) <(mc stat --json sitec/bucket/hosts | jq .)
if diff -pruN <(mc stat --json sitea/bucket/hosts | jq .) <(mc stat --json siteb/bucket/hosts | jq .) | grep -q 'COMPLETED\|REPLICA'; then
echo "verified sitea-> COMPLETED, siteb-> REPLICA"
fi
if diff -pruN <(mc stat --json sitea/bucket/hosts | jq .) <(mc stat --json sitec/bucket/hosts | jq .) | grep -q 'COMPLETED\|REPLICA'; then
echo "verified sitea-> COMPLETED, sitec-> REPLICA"
fi
echo "Verifying the metadata difference between source and target"
if diff -pruN <(mc stat --json sitea/olockbucket/hosts | jq .) <(mc stat --json siteb/olockbucket/hosts | jq .) | grep -q 'COMPLETED\|REPLICA'; then
echo "verified sitea-> COMPLETED, siteb-> REPLICA"
fi
if diff -pruN <(mc stat --json sitea/olockbucket/hosts | jq .) <(mc stat --json sitec/olockbucket/hosts | jq .) | grep -q 'COMPLETED\|REPLICA'; then
echo "verified sitea-> COMPLETED, sitec-> REPLICA"
fi
catch


@ -2,35 +2,41 @@
This document explains some basic assumptions, the design approach, and the limits of the disk caching feature. If you're looking to get started with disk cache, we suggest you go through the [getting started document](https://github.com/minio/minio/blob/master/docs/disk-caching/README.md) first.
## Command-line
## Supported environment variables
```
minio gateway <name> -h
...
...
CACHE:
MINIO_CACHE_DRIVES: List of mounted cache drives or directories delimited by ","
MINIO_CACHE_EXCLUDE: List of cache exclusion patterns delimited by ","
MINIO_CACHE_QUOTA: Maximum permitted usage of the cache in percentage (0-100).
MINIO_CACHE_AFTER: Minimum number of access before caching an object.
MINIO_CACHE_WATERMARK_LOW: % of cache quota at which cache eviction stops
MINIO_CACHE_WATERMARK_HIGH: % of cache quota at which cache eviction starts
MINIO_CACHE_RANGE: set to "on" or "off" caching of independent range requests per object, defaults to "on"
| Environment | Description |
| :---------------------- | ------------------------------------------------------------ |
| MINIO_CACHE_DRIVES | list of mounted cache drives or directories separated by "," |
| MINIO_CACHE_EXCLUDE | list of cache exclusion patterns separated by "," |
| MINIO_CACHE_QUOTA | maximum permitted usage of the cache in percentage (0-100) |
| MINIO_CACHE_AFTER | minimum number of access before caching an object |
| MINIO_CACHE_WATERMARK_LOW | % of cache quota at which cache eviction stops |
| MINIO_CACHE_WATERMARK_HIGH | % of cache quota at which cache eviction starts |
| MINIO_CACHE_RANGE | set to "on" or "off" to toggle caching of independent range requests per object; defaults to "on" |
| MINIO_CACHE_COMMIT | set to 'writeback' or 'writethrough' for upload caching |
## Use-cases
...
...
The edge serves as a gateway cache, creating an intermediary between the application and the public cloud. In this scenario, the gateways are backed by servers with a number of either hard drives or flash drives and are deployed in edge data centers. All access to the public cloud goes through these caches (write-through cache), so data is uploaded to the public cloud with strict consistency guarantees. Subsequent reads are served from the cache based on ETag match or the cache control headers.
Start MinIO gateway to s3 with edge caching enabled on '/mnt/drive1', '/mnt/drive2' and '/mnt/export1 ... /mnt/export24',
exclude all objects under 'mybucket', exclude all objects with '.pdf' as extension. Cache only those objects accessed atleast 3 times. Garbage collection triggers in at high water mark (i.e. cache disk usage reaches 90% of cache quota) or at 72% and evicts oldest objects by access time until low watermark is reached ( 70% of cache quota) , i.e. 63% of disk usage.
$ export MINIO_CACHE_DRIVES="/mnt/drive1,/mnt/drive2,/mnt/export{1..24}"
$ export MINIO_CACHE_EXCLUDE="mybucket/*,*.pdf"
$ export MINIO_CACHE_QUOTA=80
$ export MINIO_CACHE_AFTER=3
$ export MINIO_CACHE_WATERMARK_LOW=70
$ export MINIO_CACHE_WATERMARK_HIGH=90
This architecture reduces costs by decreasing the bandwidth needed to transfer data, improves performance by keeping data cached closer to the application and also reduces the operational cost - the data is still kept in the public cloud, just cached at the edge.
$ minio gateway s3
The following example shows how to:
- start MinIO gateway to s3 with edge caching enabled on the '/mnt/drive1', '/mnt/drive2' and '/mnt/export1 ... /mnt/export24' drives
- exclude all objects under 'mybucket' and all objects with the '.pdf' extension
- cache only those objects accessed at least 3 times
- trigger cache garbage collection at the high watermark (i.e. when cache disk usage reaches 90% of the cache quota, or 72% of disk usage) and evict the oldest objects by access time until the low watermark (70% of cache quota, i.e. 63% of disk usage) is reached
```sh
export MINIO_CACHE_DRIVES="/mnt/drive1,/mnt/drive2,/mnt/export{1..24}"
export MINIO_CACHE_EXCLUDE="mybucket/*,*.pdf"
export MINIO_CACHE_QUOTA=80
export MINIO_CACHE_AFTER=3
export MINIO_CACHE_WATERMARK_LOW=70
export MINIO_CACHE_WATERMARK_HIGH=90
minio gateway s3 https://s3.amazonaws.com
```
### Run MinIO gateway with cache on Docker Container
@ -87,7 +93,13 @@ master key to automatically encrypt all cached content.
Note that the cache KMS master key is not recommended for use in production deployments. If the MinIO server/gateway machine is ever compromised, the cache KMS master key must also be treated as compromised.
Support for external KMS to manage cache KMS keys is on the roadmap, and would be ideal for production use cases.
> NOTE: Expiration happens automatically based on the configured interval as explained above, frequently accessed objects stay alive in cache for a significantly longer time.
- A `MINIO_CACHE_COMMIT` setting of `writethrough` allows caching of single and multipart uploads synchronously. By default, however, single PUT operations are cached asynchronously on write without any special setting.
- Partially cached stale uploads older than 24 hours are automatically cleaned up.
- Expiration happens automatically based on the configured interval as explained above; frequently accessed objects stay alive in the cache for a significantly longer time.
> NOTE: `MINIO_CACHE_COMMIT` also accepts the value `writeback`, which allows staging single uploads in the cache before committing to the remote. It is not possible to stage multipart uploads in the cache for consistency reasons; hence, multipart uploads are cached synchronously even if `writeback` is set.
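As a rough illustration of these commit semantics, the following sketch captures when an upload may be staged in the cache before the remote commit. `stageInCacheFirst` is a hypothetical helper written for this document, not MinIO's internal API.

```go
package main

import "fmt"

// stageInCacheFirst models the documented behavior: only single uploads can
// be staged in cache before committing to the remote, and only under the
// "writeback" commit mode; multipart uploads are always written through.
func stageInCacheFirst(commitMode string, isMultipart bool) bool {
	if isMultipart {
		// Multipart uploads are cached synchronously for consistency.
		return false
	}
	return commitMode == "writeback"
}

func main() {
	fmt.Println(stageInCacheFirst("writeback", false)) // true: staged, committed later
	fmt.Println(stageInCacheFirst("writeback", true))  // false: written through anyway
}
```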
### Crash Recovery
@ -96,4 +108,4 @@ Upon restart of minio gateway after a running minio process is killed or crashes
## Limits
- Bucket policies are not cached, so anonymous operations are not supported when backend is offline.
- Objects are distributed using deterministic hashing among the list of configured cache drives. If one or more drives go offline, or cache drive configuration is altered in any way, performance may degrade to a linear lookup time depending on the number of disks in cache.
- Objects are distributed using deterministic hashing among the list of configured cache drives. If one or more drives go offline, or cache drive configuration is altered in any way, performance may degrade to O(n) lookup time depending on the number of disks in cache.
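To illustrate why altering the drive set degrades lookups, here is a minimal sketch of deterministic object-to-drive placement: the same key always hashes to the same drive, so changing the drive list changes placements and forces a scan of all drives to find previously cached objects. This is an assumption-level illustration, not MinIO's exact algorithm.

```go
package main

import (
	"fmt"
	"hash/crc32"
)

// pickCacheDrive deterministically maps an object key to one of the
// configured cache drives. If the drive list changes, the mapping changes,
// and lookups fall back to checking every drive (the O(n) case above).
func pickCacheDrive(drives []string, key string) string {
	idx := crc32.ChecksumIEEE([]byte(key)) % uint32(len(drives))
	return drives[idx]
}

func main() {
	drives := []string{"/mnt/drive1", "/mnt/drive2", "/mnt/export1"}
	fmt.Println(pickCacheDrive(drives, "mybucket/object.bin"))
}
```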


@ -137,7 +137,9 @@ With MinIO S3 gateway, you can use MinIO Console to explore AWS S3 based objects
### Known limitations
- Bucket notification APIs are not supported.
- Bucket Notification APIs are not supported.
- Bucket Locking APIs are not supported.
- Versioned buckets on AWS S3 are not supported from the gateway layer. The gateway is meant to be used with non-versioned buckets.
## Explore Further


@ -114,6 +114,8 @@ Start (or) Restart Prometheus service by running
Here `prometheus.yml` is the name of the configuration file. You can now see MinIO metrics in the Prometheus dashboard. By default, the Prometheus dashboard is accessible at `http://localhost:9090`.
Prometheus sets the `Host` header to `domain:port` as part of HTTP operations against the MinIO metrics endpoint. For MinIO deployments behind a load balancer, reverse proxy, or other control plane (HAProxy, nginx, pfsense, opnsense, etc.), ensure the network service supports routing these requests to the deployment.
### 6. Configure Grafana
After Prometheus is configured, you can use Grafana to visualize MinIO metrics.


@ -27,7 +27,7 @@ For best deployment experience MinIO recommends operating systems RHEL/CentOS 8.
| Maximum number of parts per upload | 10,000 |
| Part size range | 5 MiB to 5 GiB. Last part can be 0 B to 5 GiB |
| Maximum number of parts returned per list parts request | 10000 |
| Maximum number of objects returned per list objects request | 4500 |
| Maximum number of objects returned per list objects request | 1000 |
| Maximum number of multipart uploads returned per list multipart uploads request | 1000 |
| Maximum length for bucket names | 63 |
| Maximum length for object names | 1024 |
@ -50,8 +50,8 @@ We found the following APIs to be redundant or less useful outside of AWS S3. If
- ObjectTorrent
### Object name restrictions on MinIO
- Object names that contain characters `^*|\/&";` are unsupported on Windows platform or any other file systems that do not support filenames with special charaters. **This list is non exhaustive, it depends on the operating system and filesystem under use - please consult your operating system vendor**. MinIO recommends using Linux based deployments for production workloads.
- Object names that contain the characters `^*|\/&";` are unsupported on the Windows platform or any other file systems that do not support filenames with special characters. **This list is non-exhaustive; it depends on the operating system and filesystem in use - please consult your operating system vendor**. MinIO recommends using Linux-based deployments for production workloads.
- Objects should not have other objects as parent prefixes; applications relying on this behavior should change it and use proper unique keys. For example, conflicting key patterns such as the following are not supported.
```


@ -2,7 +2,7 @@ version: '3.7'
# Settings and configurations that are common for all containers
x-minio-common: &minio-common
image: quay.io/minio/minio:RELEASE.2021-10-23T03-28-24Z
image: quay.io/minio/minio:RELEASE.2021-11-09T03-21-45Z
command: server --console-address ":9001" http://minio{1...4}/data{1...2}
expose:
- "9000"


@ -27,7 +27,7 @@ mc admin config set myminio/ api requests_max=1600
mc admin service restart myminio/
```
> NOTE: A zero value of `requests_max` means unlimited and that is the default behavior.
> NOTE: A zero value of `requests_max` means MinIO will automatically calculate the maximum number of requests based on available RAM size, and that is the default behavior.
### Configuring connection (wait) deadline
This value works in conjunction with the max connection setting; setting it allows long-waiting requests to time out quickly when there is no slot available to perform the request.

go.mod (45 changes)

@ -12,7 +12,7 @@ require (
github.com/bcicen/jstream v1.0.1
github.com/beevik/ntp v0.3.0
github.com/bits-and-blooms/bloom/v3 v3.0.1
github.com/cespare/xxhash/v2 v2.1.1
github.com/cespare/xxhash/v2 v2.1.2
github.com/cheggaaa/pb v1.0.29
github.com/colinmarc/hdfs/v2 v2.2.0
github.com/coredns/coredns v1.4.0
@ -23,34 +23,34 @@ require (
github.com/dustin/go-humanize v1.0.0
github.com/eclipse/paho.mqtt.golang v1.3.0
github.com/elastic/go-elasticsearch/v7 v7.12.0
github.com/fatih/color v1.12.0
github.com/fatih/color v1.13.0
github.com/go-ldap/ldap/v3 v3.2.4
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/go-openapi/loads v0.20.2
github.com/go-sql-driver/mysql v1.5.0
github.com/golang-jwt/jwt v3.2.1+incompatible
github.com/golang-jwt/jwt/v4 v4.1.0
github.com/gomodule/redigo v2.0.0+incompatible
github.com/google/uuid v1.1.2
github.com/google/uuid v1.3.0
github.com/gorilla/mux v1.8.0
github.com/inconshreveable/mousetrap v1.0.0
github.com/jcmturner/gokrb5/v8 v8.4.2
github.com/json-iterator/go v1.1.12
github.com/klauspost/compress v1.13.5
github.com/klauspost/compress v1.13.6
github.com/klauspost/cpuid/v2 v2.0.9
github.com/klauspost/pgzip v1.2.5
github.com/klauspost/readahead v1.3.1
github.com/klauspost/reedsolomon v1.9.13
github.com/lib/pq v1.9.0
github.com/mattn/go-runewidth v0.0.13 // indirect
github.com/miekg/dns v1.1.35
github.com/miekg/dns v1.1.43
github.com/minio/cli v1.22.0
github.com/minio/console v0.11.0
github.com/minio/console v0.12.2
github.com/minio/csvparser v1.0.0
github.com/minio/highwayhash v1.0.2
github.com/minio/kes v0.14.0
github.com/minio/madmin-go v1.1.10
github.com/minio/madmin-go v1.1.11
github.com/minio/minio-go/v7 v7.0.15
github.com/minio/parquet-go v1.0.0
github.com/minio/pkg v1.1.5
github.com/minio/parquet-go v1.1.0
github.com/minio/pkg v1.1.6
github.com/minio/selfupdate v0.3.1
github.com/minio/sha256-simd v1.0.0
github.com/minio/simdjson-go v0.2.1
@ -64,10 +64,10 @@ require (
github.com/nats-io/stan.go v0.8.3
github.com/ncw/directio v1.0.5
github.com/nsqio/go-nsq v1.0.8
github.com/philhofer/fwd v1.1.1
github.com/philhofer/fwd v1.1.2-0.20210722190033-5c56ac6d0bb9
github.com/pierrec/lz4 v2.6.0+incompatible
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.8.0
github.com/prometheus/client_golang v1.11.0
github.com/prometheus/client_model v0.2.0
github.com/prometheus/procfs v0.7.3
github.com/rs/cors v1.7.0
@ -75,21 +75,20 @@ require (
github.com/secure-io/sio-go v0.3.1
github.com/shirou/gopsutil/v3 v3.21.9
github.com/streadway/amqp v1.0.0
github.com/tinylib/msgp v1.1.6
github.com/tinylib/msgp v1.1.7-0.20211026165309-e818a1881b0e
github.com/valyala/bytebufferpool v1.0.0
github.com/xdg/scram v0.0.0-20180814205039-7eeb5667e42c
github.com/yargevad/filepathx v1.0.0
go.etcd.io/etcd/api/v3 v3.5.0-beta.4
go.etcd.io/etcd/client/v3 v3.5.0-beta.4
go.opencensus.io v0.22.5 // indirect
go.uber.org/atomic v1.7.0
go.uber.org/zap v1.16.1-0.20210329175301-c23abee72d19
go.etcd.io/etcd/api/v3 v3.5.0
go.etcd.io/etcd/client/v3 v3.5.0
go.uber.org/atomic v1.9.0
go.uber.org/zap v1.19.1
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519
golang.org/x/net v0.0.0-20211020060615-d418f374d309 // indirect
golang.org/x/sys v0.0.0-20211020174200-9d6173849985
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba
golang.org/x/tools v0.1.1 // indirect
google.golang.org/api v0.31.0
gopkg.in/ini.v1 v1.63.2 // indirect
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac
google.golang.org/api v0.58.0
gopkg.in/yaml.v2 v2.4.0
)
replace github.com/gomodule/redigo v2.0.0+incompatible => github.com/gomodule/redigo v1.8.5

go.sum (343 changes)

@ -17,8 +17,19 @@ cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKV
cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs=
cloud.google.com/go v0.60.0/go.mod h1:yw2G51M9IfRboUH61Us8GqCeF1PzPblB823Mn2q2eAU=
cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc=
cloud.google.com/go v0.65.0 h1:Dg9iHVQfrhq82rUNu9ZxUDrJLaxFUe/HlCVaLyRruq8=
cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
cloud.google.com/go v0.72.0/go.mod h1:M+5Vjvlc2wnp6tjzE102Dw08nGShTscUx2nZMufOKPI=
cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmWk=
cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg=
cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8=
cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0=
cloud.google.com/go v0.83.0/go.mod h1:Z7MJUsANfY0pYPdw0lbnivPx4/vhy/e2FEkSkF7vAVY=
cloud.google.com/go v0.84.0/go.mod h1:RazrYuxIK6Kb7YrzzhPoLmCVzl7Sup4NrbKPg8KHSUM=
cloud.google.com/go v0.87.0/go.mod h1:TpDYlFy7vuLzZMMZ+B6iRiELaY7z/gJPaqbMx6mlWcY=
cloud.google.com/go v0.90.0/go.mod h1:kRX0mNRHe0e2rC6oNakvwQqzyDmg57xJ+SZU1eT2aDQ=
cloud.google.com/go v0.93.3/go.mod h1:8utlLll2EF5XMAV15woO4lSbWQlk8rer9aLOfLh7+YI=
cloud.google.com/go v0.94.1 h1:DwuSvDZ1pTYGbXo8yOJevCTr3BoBlE+OVkHAKiYQUXc=
cloud.google.com/go v0.94.1/go.mod h1:qAlAugsXlC+JWO+Bke5vCtc9ONxjQT3drlTTnAplMW4=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
@ -45,8 +56,6 @@ contrib.go.opencensus.io/exporter/stackdriver v0.12.1/go.mod h1:iwB6wGarfphGGe/e
contrib.go.opencensus.io/integrations/ocsql v0.1.4/go.mod h1:8DsSdjz3F+APR+0z0WkU1aRorQCFfRxvqjUUPMbF3fE=
contrib.go.opencensus.io/resource v0.1.1/go.mod h1:F361eGI91LCmW1I/Saf+rX0+OFcigGlFvXwEGEnkRLA=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
git.apache.org/thrift.git v0.13.0 h1:/3bz5WZ+sqYArk7MBBBbDufMxKKOA56/6JO6psDpUDY=
git.apache.org/thrift.git v0.13.0/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
github.com/Azure/azure-amqp-common-go/v2 v2.1.0/go.mod h1:R8rea+gJRuJR6QxTir/XuEd+YuKoUiazDC/N96FiDEU=
github.com/Azure/azure-pipeline-go v0.2.1/go.mod h1:UGSo8XybXnIGZ3epmeBw7Jdz+HiUVpqIlpz/HKHylF4=
github.com/Azure/azure-pipeline-go v0.2.2 h1:6oiIS9yaG6XCCzhgAgKFfIWyo4LLCiDhZot6ltoThhY=
@ -151,6 +160,8 @@ github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883/go.mod h1:rCTlJbsFo
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/apache/thrift v0.13.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/apache/thrift v0.15.0 h1:aGvdaR0v1t9XLgjtBYwxcBvBOTMqClzwE26CHOgjW1Y=
github.com/apache/thrift v0.15.0/go.mod h1:PHK3hniurgQaNMZYaCLEqXKsYK8upmhPbmdP2FXSqgU=
github.com/apex/log v1.1.4/go.mod h1:AlpoD9aScyQfJDVHmLMEcx4oU6LqzkWp4Mg9GdAcEvQ=
github.com/apex/log v1.3.0/go.mod h1:jd8Vpsr46WAe3EZSQ/IUMs2qQD/GOycT5rPWCO1yGcs=
github.com/apex/logs v0.0.4/go.mod h1:XzxuLZ5myVHDy9SAmYpamKKRNApGj54PfYLcFrXqDwo=
@ -187,6 +198,8 @@ github.com/bcicen/jstream v1.0.1 h1:BXY7Cu4rdmc0rhyTVyT3UkxAiX3bnLpKLas9btbH5ck=
github.com/bcicen/jstream v1.0.1/go.mod h1:9ielPxqFry7Y4Tg3j4BfjPocfJ3TbsRtXOAYXYmRuAQ=
github.com/beevik/ntp v0.3.0 h1:xzVrPrE4ziasFXgBVBZJDP0Wg/KpMwk2KHJ4Ba8GrDw=
github.com/beevik/ntp v0.3.0/go.mod h1:hIHWr+l3+/clUnF44zdK+CWW7fO8dR5cIylAQ76NRpg=
github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
@ -206,6 +219,8 @@ github.com/bombsimon/wsl/v2 v2.0.0/go.mod h1:mf25kr/SqFEPhhcxW1+7pxzGlW+hIl/hYTK
github.com/bombsimon/wsl/v2 v2.2.0/go.mod h1:Azh8c3XGEJl9LyX0/sFC+CKMc7Ssgua0g+6abzXN4Pg=
github.com/bombsimon/wsl/v3 v3.0.0/go.mod h1:st10JtZYLE4D5sC7b8xV4zTKZwAQjCH/Hy2Pm1FNZIc=
github.com/bombsimon/wsl/v3 v3.1.0/go.mod h1:st10JtZYLE4D5sC7b8xV4zTKZwAQjCH/Hy2Pm1FNZIc=
github.com/briandowns/spinner v1.16.0 h1:DFmp6hEaIx2QXXuqSJmtfSBSAjRmpGiKG6ip2Wm/yOs=
github.com/briandowns/spinner v1.16.0/go.mod h1:QOuQk7x+EaDASo80FEXwlwiA+j/PPIcX3FScO+3/ZPQ=
github.com/caarlos0/ctrlc v1.0.0/go.mod h1:CdXpj4rmq0q/1Eb44M9zi2nKB0QraNKuRGYGrrHhcQw=
github.com/campoy/unique v0.0.0-20180121183637-88950e537e7e/go.mod h1:9IOqJGCPMSc6E5ydlp5NIonxObaeu/Iub/X03EKPVYo=
github.com/casbin/casbin/v2 v2.1.2/go.mod h1:YcPU1XXisHhLzuxH9coDNf2FbKpjGlbCg3n9yuLkIJQ=
@ -215,8 +230,9 @@ github.com/census-instrumentation/opencensus-proto v0.2.0/go.mod h1:f6KPmirojxKA
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cheggaaa/pb v1.0.29 h1:FckUN5ngEk2LpvuG0fw1GEFx6LtyY2pWI/Z2QgCnEYo=
github.com/cheggaaa/pb v1.0.29/go.mod h1:W40334L7FMC5JKWldsTWbdGjLo0RxUKK73K+TuPxX30=
github.com/cheggaaa/pb/v3 v3.0.5/go.mod h1:X1L61/+36nz9bjIsrDU52qHKOQukUQe2Ge+YvGuquCw=
@ -229,7 +245,10 @@ github.com/circonus-labs/circonusllhist v0.1.3/go.mod h1:kMXHVDlOchFAehlya5ePtbp
github.com/clbanning/x2j v0.0.0-20191024224557-825249438eec/go.mod h1:jMjuTZXRI4dUb/I5gc9Hdhagfvm9+RyrPryS/auMzxE=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cockroachdb/apd v1.1.0/go.mod h1:8Sl8LxpKi29FqWXR16WEFZRNSz3SoPzUzeMeY4+DwBQ=
github.com/cockroachdb/cockroach-go/v2 v2.0.3 h1:ZA346ACHIZctef6trOTwBAEvPVm1k0uLm/bb2Atc+S8=
github.com/cockroachdb/cockroach-go/v2 v2.0.3/go.mod h1:hAuDgiVgDVkfirP9JnhXEfcXEPRKBpYdGz+l7mvYSzw=
@ -254,8 +273,9 @@ github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20190719114852-fd7a80b32e1f h1:JOrtw2xFKzlg+cbHpyrpLDmnN1HqhBfnX7WDiW7eG2c=
github.com/coreos/go-systemd v0.0.0-20190719114852-fd7a80b32e1f/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd/v22 v22.3.1 h1:7OO2CXWMYNDdaAzP51t4lCCZWwpQHmvPbm9sxWjm3So=
github.com/coreos/go-systemd/v22 v22.3.1/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/go-systemd/v22 v22.3.2 h1:D9/bQk5vlXQFZ6Kwuu6zaiXJ9oTPe68++AzAJc1DzSI=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/coreos/pkg v0.0.0-20180108230652-97fdf19511ea/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
@@ -275,11 +295,12 @@ github.com/dchest/siphash v1.2.1 h1:4cLinnzVJDKxTCl9B01807Yiy+W7ZzVHj/KIroQRvT4=
github.com/dchest/siphash v1.2.1/go.mod h1:q+IRvb2gOSrUnYoPqHiyHXS0FOBBOdl6tONBlVnOnt4=
github.com/decred/dcrd/chaincfg/chainhash v1.0.2/go.mod h1:BpbrGgrPTr3YJYRN3Bm+D9NuaFd+zGyNeIKgrhCXK60=
github.com/decred/dcrd/crypto/blake256 v1.0.0/go.mod h1:sQl2p6Y26YV+ZOcSTP6thNdn47hh8kt6rqSlvmrXFAc=
github.com/decred/dcrd/dcrec/secp256k1/v3 v3.0.0 h1:sgNeV1VRMDzs6rzyPpxyM0jp317hnwiq58Filgag2xw=
github.com/decred/dcrd/dcrec/secp256k1/v3 v3.0.0/go.mod h1:J70FGZSbzsjecRTiTzER+3f1KZLNaXkuv+yeFTKoxM8=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.0-20210816181553-5444fa50b93d/go.mod h1:tmAIfUFEirG/Y8jhZ9M+h36obRZAk/1fcSpXwAVlfqE=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.0 h1:Fe5DW39aaoS/fqZiYlylEqQWIKznnbatWSHpWdFA3oQ=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.0/go.mod h1:hyedUtir6IdtD/7lIxGeCxkaw7y45JueMRL4DIyJDKs=
github.com/denisenkom/go-mssqldb v0.0.0-20191124224453-732737034ffd/go.mod h1:xbL0rPBG9cCiLr28tMa8zpbdarY27NDyej4t/EjAShU=
github.com/devigned/tab v0.1.1/go.mod h1:XG9mPq0dFghrYvoBF3xdRrJzSTX1b7IQrvaL9mzjeJY=
github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/dimchansky/utfbom v1.1.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQvIirEdv+8=
@@ -323,7 +344,11 @@ github.com/envoyproxy/go-control-plane v0.6.9/go.mod h1:SBwIajubJHhxtWwsL9s8ss4s
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5ynNVH9qI8YYLbd1fK2po=
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ=
github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/erikstmartin/go-testdb v0.0.0-20160219214506-8d10e4a1bae5/go.mod h1:a2zkGnVExMxdzMo3M0Hi/3sEU+cWnZpSni0O6/Yb/P0=
github.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
@@ -331,8 +356,9 @@ github.com/evanphx/json-patch v4.5.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLi
github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
github.com/fatih/color v1.12.0 h1:mRhaKNwANqRgUBGKmnI5ZxEk7QXmjQeCcuYFMX2bfcc=
github.com/fatih/color v1.12.0/go.mod h1:ELkj/draVOlAH/xkhN6mQ50Qd0MPOk5AAr3maGEBuJM=
github.com/fatih/color v1.13.0 h1:8LOYc1KYPPmyKMuN8QV2DNRWNbLo6LZ0iLs8+mlH53w=
github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=
github.com/fatih/structs v1.1.0 h1:Q7juDM0QtcnhCpeyLGQKyg4TOIghuNXrkL32pHAUMxo=
github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible h1:TcekIExNqud5crz4xD2pavyTgWiPvpYe4Xau31I0PRk=
@@ -365,6 +391,7 @@ github.com/go-ini/ini v1.25.4/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3I
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.10.0/go.mod h1:xUsJbQ/Fp4kEt7AFgCuvyX4a71u8h9jB8tj/ORgOZ7o=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-ldap/ldap v3.0.2+incompatible h1:kD5HQcAzlQ7yrhfn+h+MSABeAy/jAJhvIJ/QDllP44g=
github.com/go-ldap/ldap v3.0.2+incompatible/go.mod h1:qfd9rJvER9Q0/D/Sqn1DfHRoBp40uXYvFoEVrNEPqRc=
github.com/go-ldap/ldap/v3 v3.2.4 h1:PFavAq2xTgzo/loE8qNXcQaofAaqIpI4WgaLdv+1l3E=
@@ -528,8 +555,10 @@ github.com/gobuffalo/packr/v2 v2.0.9/go.mod h1:emmyGweYTm6Kdper+iywB6YK5YzuKchGt
github.com/gobuffalo/packr/v2 v2.2.0/go.mod h1:CaAwI0GPIAv+5wKLtv8Afwl+Cm78K/I/VCm/3ptBN+0=
github.com/gobuffalo/syncx v0.0.0-20190224160051-33c29581e754/go.mod h1:HhnNqWY95UYwwW3uSASeV7vtgYkT2t16hJgV3AEPUpw=
github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
github.com/goccy/go-json v0.4.8 h1:TfwOxfSp8hXH+ivoOk36RyDNmXATUETRdaNWDaZglf8=
github.com/goccy/go-json v0.4.8/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/goccy/go-json v0.7.8/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/goccy/go-json v0.7.9 h1:mSp3uo1tr6MXQTYopSNhHTUnJhd2zQ4Yk+HdJZP+ZRY=
github.com/goccy/go-json v0.7.9/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gofrs/flock v0.0.0-20190320160742-5135e617513b/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU=
github.com/gofrs/uuid v3.2.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
@@ -541,8 +570,11 @@ github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d/go.mod h1:SlYgWuQ5
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt v3.2.1+incompatible h1:73Z+4BJcrTC+KczS6WvTPvRGOp1WmfEP4Q1lOd9Z/+c=
github.com/golang-jwt/jwt v3.2.1+incompatible/go.mod h1:8pz2t5EyA70fFQQSrl6XZXzqecmYZeUEB8OUGHkxJ+I=
github.com/golang-jwt/jwt v3.2.2+incompatible h1:IfV12K8xAKAnZqdXVzCZ+TOjboZ2keLg81eXfW3O+oY=
github.com/golang-jwt/jwt v3.2.2+incompatible/go.mod h1:8pz2t5EyA70fFQQSrl6XZXzqecmYZeUEB8OUGHkxJ+I=
github.com/golang-jwt/jwt/v4 v4.1.0 h1:XUgk2Ex5veyVFVeLm0xhusUTQybEbexJXrvPNOKkSY0=
github.com/golang-jwt/jwt/v4 v4.1.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
github.com/golang-sql/civil v0.0.0-20190719163853-cb61b32ac6fe/go.mod h1:8vg3r2VgvsThLBIFL93Qb5yWzgyZWhEmBwUJWevAkK0=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
@@ -559,6 +591,8 @@ github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt
github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
github.com/golang/mock v1.5.0/go.mod h1:CWnOUgYIOo4TcNZ0wHX3YZCqsaM1I1Jvs6v3mP3KVu8=
github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
github.com/golang/protobuf v0.0.0-20161109072736-4bd1920723d7/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.1.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
@@ -576,6 +610,7 @@ github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QD
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
@@ -601,8 +636,8 @@ github.com/golangci/prealloc v0.0.0-20180630174525-215b22d4de21/go.mod h1:tf5+bz
github.com/golangci/revgrep v0.0.0-20180526074752-d9c87f5ffaf0/go.mod h1:qOQCunEYvmd/TLamH+7LlVccLvUH5kZNhbCgTHoBbp4=
github.com/golangci/revgrep v0.0.0-20180812185044-276a5c0a1039/go.mod h1:qOQCunEYvmd/TLamH+7LlVccLvUH5kZNhbCgTHoBbp4=
github.com/golangci/unconvert v0.0.0-20180507085042-28b1c447d1f4/go.mod h1:Izgrg8RkN3rCIMLGE9CyYmU9pY2Jer6DgANEnZ/L/cQ=
github.com/gomodule/redigo v2.0.0+incompatible h1:K/R+8tc58AaqLkqG2Ol3Qk+DR/TlNuhuh457pBFPtt0=
github.com/gomodule/redigo v2.0.0+incompatible/go.mod h1:B4C85qUVwatsJoIUNIfCRsp7qO0iAmpGFZ4EELWSbC4=
github.com/gomodule/redigo v1.8.5 h1:nRAxCa+SVsyjSBrtZmG/cqb6VbTmuRzpg/PoTFlpumc=
github.com/gomodule/redigo v1.8.5/go.mod h1:P9dn9mFrCBvWhGE1wpxx6fgq7BAeLBk+UUUzlpkBYO0=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
@@ -613,9 +648,11 @@ github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-containerregistry v0.1.2/go.mod h1:GPivBPgdAyd2SU+vf6EpsgOtWDuPqjW0hJZt4rNdTZ4=
github.com/google/go-github/v28 v28.1.1/go.mod h1:bsqJWQX05omyWVmc00nEUql9mhQyv38lDZ8kPZcQVoM=
github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
@@ -628,8 +665,10 @@ github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian v2.1.1-0.20190517191504-25dcb96d9e51+incompatible h1:xmapqc1AyLoB+ddYT6r04bD9lIjlOqGaREovi0SzFaE=
github.com/google/martian v2.1.1-0.20190517191504-25dcb96d9e51+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0 h1:pMen7vLs8nvgEYhywH3KDWJIJTeEr2ULsVWHWYHQyBs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/martian/v3 v3.2.1 h1:d8MncMlErDFTwQGBK1xhv026j9kqhvw1Qv9IbWT1VLQ=
github.com/google/martian/v3 v3.2.1/go.mod h1:oBOf6HBosgwRXnUGWUB05QECsc6uvmMiJ3+6W4l/CUk=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
@@ -638,20 +677,30 @@ github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hf
github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200507031123-427632fa3b1c/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20201023163331-3e6fc7fc9c4c/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210609004039-a478d1d731e9/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/rpmpack v0.0.0-20191226140753-aa36bfddb3a0/go.mod h1:RaTPr0KUf2K7fnZYLNDrr8rxAamWs3iNywJLtQ2AzBg=
github.com/google/subcommands v1.0.1/go.mod h1:ZjhPrFU+Olkh9WazFPsl27BQ4UPiG37m3yTrtFlrHVk=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.2 h1:EVhdT+1Kseyi1/pUmXKaFxYsDNy9RQYkMWRH68J/W7Y=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/wire v0.3.0/go.mod h1:i1DMg/Lu8Sz5yYl25iOdmc5CT5qusaa+zmRWs16741s=
github.com/google/wire v0.4.0/go.mod h1:ngWDr9Qvq3yZA10YrxfyGELY/AFWGVpy9c1LTRi1EoU=
github.com/googleapis/gax-go v2.0.2+incompatible h1:silFMLAnr330+NRuag/VjIGF7TLp/LBrV2CJKFLWEww=
github.com/googleapis/gax-go v2.0.2+incompatible/go.mod h1:SFVmujtThgffbyetf+mdk2eWhX2bMyUtNHzFKcPA9HY=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5 h1:sjZBwGj9Jlw33ImPtvFviGYvseOtDM7hkSKB7+Tv3SM=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0=
github.com/googleapis/gax-go/v2 v2.1.1 h1:dp3bWCh+PPO1zjRRiCSczJav13sBvG4UhNyVTa1KqdU=
github.com/googleapis/gax-go/v2 v2.1.1/go.mod h1:hddJymUZASv3XPyGkUpKj8pPO47Rmb0eJc8R6ouapiM=
github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
github.com/googleapis/gnostic v0.1.0/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
github.com/googleapis/gnostic v0.2.0/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
@@ -695,8 +744,9 @@ github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBt
github.com/hashicorp/consul/api v1.3.0/go.mod h1:MmDNSzIMUjNpY/mQ398R4bk2FnqQLoPndWW5VkKPlCE=
github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/consul/sdk v0.3.0/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-hclog v0.0.0-20180709165350-ff2cf002a8dd/go.mod h1:9bjs9uLqI8l75knNv3lV1kA55veR+WUPSiKIWcQHudI=
@@ -711,8 +761,9 @@ github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iP
github.com/hashicorp/go-msgpack v0.5.5/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
github.com/hashicorp/go-msgpack v1.1.5 h1:9byZdVjKTe5mce63pRVNP1L7UAmdHOTEMGehn6KvJWs=
github.com/hashicorp/go-msgpack v1.1.5/go.mod h1:gWVc3sv/wbDmR3rQsj1CAktEZzoz1YNK9NfGLXJ69/4=
github.com/hashicorp/go-multierror v1.0.0 h1:iVjPR7a6H0tWELX5NxNe7bYopibicUzc7uPribsnS6o=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/go-plugin v1.0.1/go.mod h1:++UyYGoz3o5w9ZzAdZxtQKrWWP+iqPBn3cQptSMzBuY=
github.com/hashicorp/go-retryablehttp v0.5.3/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=
github.com/hashicorp/go-retryablehttp v0.5.4/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=
@@ -751,13 +802,13 @@ github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d/go.mod h1:+NfK9FKe
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/hudl/fargo v1.3.0/go.mod h1:y3CKSmjA+wD2gak7sUSXTAoopbhU08POFhmITJgmKTg=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.7/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.9/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.10/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/inconshreveable/go-update v0.0.0-20160112193335-8152e7eb6ccf h1:WfD7VjIE6z8dIvMsI4/s+1qr5EL+zoIGev1BQj1eoJ8=
github.com/inconshreveable/go-update v0.0.0-20160112193335-8152e7eb6ccf/go.mod h1:hyb9oH7vZsitZCiBt0ZvifOrB+qc8PS5IiilCIb87rg=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/influxdata/influxdb1-client v0.0.0-20191209144304-8bf82d3c094d/go.mod h1:qj24IKcXYK6Iy9ceXlo3Tc+vtHo9lIhSX5JddghvEPo=
github.com/jackc/chunkreader v1.0.0 h1:4s39bBR8ByfqH+DKm8rQA3E1LHZWB9XWcrz8fqaZbe0=
@@ -866,7 +917,6 @@ github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1 h1:6QPYqodiu3GuPL+7mfx+NwDdp2eTkp9IfEUpgAwUN0o=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
@@ -884,13 +934,15 @@ github.com/klauspost/compress v1.11.0/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYs
github.com/klauspost/compress v1.11.7/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/compress v1.11.12/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/compress v1.12.2/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=
github.com/klauspost/compress v1.13.5 h1:9O69jUPDcsT9fEm74W92rZL9FQY7rCdaXVneq+yyzl4=
github.com/klauspost/compress v1.13.5/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/compress v1.13.6 h1:P76CopJELS0TiO2mebmnzgWaajssP/EszplttgQxcgc=
github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/cpuid v0.0.0-20180405133222-e7e905edc00e/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
github.com/klauspost/cpuid v1.2.0/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
github.com/klauspost/cpuid v1.2.3/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
github.com/klauspost/cpuid v1.3.1 h1:5JNjFYYQrZeKRJ0734q51WCEEn2huer72Dc7K+R/b6s=
github.com/klauspost/cpuid v1.3.1/go.mod h1:bYW4mA6ZgKPob1/Dlai2LviZJO7KGI3uoWLd42rAQw4=
github.com/klauspost/cpuid/v2 v2.0.1/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.0.3/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.0.4/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.0.6/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
@@ -917,21 +969,23 @@ github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kubernetes-csi/csi-lib-utils v0.7.0/go.mod h1:bze+2G9+cmoHxN6+WyG1qT4MDxgZJMLGwc7V4acPNm0=
github.com/lestrrat-go/backoff/v2 v2.0.7 h1:i2SeK33aOFJlUNJZzf2IpXRBvqBBnaGXfY5Xaop/GsE=
github.com/lestrrat-go/backoff/v2 v2.0.7/go.mod h1:rHP/q/r9aT27n24JQLa7JhSQZCKBBOiM/uP402WwN8Y=
github.com/lestrrat-go/backoff/v2 v2.0.8 h1:oNb5E5isby2kiro9AgdHLv5N5tint1AnDVVf2E2un5A=
github.com/lestrrat-go/backoff/v2 v2.0.8/go.mod h1:rHP/q/r9aT27n24JQLa7JhSQZCKBBOiM/uP402WwN8Y=
github.com/lestrrat-go/blackmagic v1.0.0 h1:XzdxDbuQTz0RZZEmdU7cnQxUtFUzgCSPq8RCz4BxIi4=
github.com/lestrrat-go/blackmagic v1.0.0/go.mod h1:TNgH//0vYSs8VXDCfkZLgIrVTTXQELZffUV0tz3MtdQ=
github.com/lestrrat-go/codegen v1.0.0/go.mod h1:JhJw6OQAuPEfVKUCLItpaVLumDGWQznd1VaXrBk9TdM=
github.com/lestrrat-go/codegen v1.0.2/go.mod h1:JhJw6OQAuPEfVKUCLItpaVLumDGWQznd1VaXrBk9TdM=
github.com/lestrrat-go/httpcc v1.0.0 h1:FszVC6cKfDvBKcJv646+lkh4GydQg2Z29scgUfkOpYc=
github.com/lestrrat-go/httpcc v1.0.0/go.mod h1:tGS/u00Vh5N6FHNkExqGGNId8e0Big+++0Gf8MBnAvE=
github.com/lestrrat-go/iter v1.0.1 h1:q8faalr2dY6o8bV45uwrxq12bRa1ezKrB6oM9FUgN4A=
github.com/lestrrat-go/iter v1.0.1/go.mod h1:zIdgO1mRKhn8l9vrZJZz9TUMMFbQbLeTsbqPDrJ/OJc=
github.com/lestrrat-go/jwx v1.2.0 h1:n08WEu8cJy3uzuQ39KWAOIhM4XfeozgaEGA8mTiioZ8=
github.com/lestrrat-go/jwx v1.2.0/go.mod h1:Tg2uP7bpxEHUDtuWjap/PxroJ4okxGzkQznXiG+a5Dc=
github.com/lestrrat-go/jwx v1.2.7 h1:wO7fEc3PW56wpQBMU5CyRkrk4DVsXxCoJg7oIm5HHE4=
github.com/lestrrat-go/jwx v1.2.7/go.mod h1:bw24IXWbavc0R2RsOtpXL7RtMyP589yZ1+L7kd09ZGA=
github.com/lestrrat-go/option v0.0.0-20210103042652-6f1ecfceda35/go.mod h1:5ZHFbivi4xwXxhxY9XHDe2FHo6/Z7WWmtT7T5nBBp3I=
github.com/lestrrat-go/option v1.0.0 h1:WqAWL8kh8VcSoD6xjSH34/1m8yxluXQbDeKNfvFeEO4=
github.com/lestrrat-go/option v1.0.0/go.mod h1:5ZHFbivi4xwXxhxY9XHDe2FHo6/Z7WWmtT7T5nBBp3I=
github.com/lestrrat-go/pdebug/v3 v3.0.1 h1:3G5sX/aw/TbMTtVc9U7IHBWRZtMvwvBziF1e4HoQtv8=
github.com/lestrrat-go/pdebug/v3 v3.0.1/go.mod h1:za+m+Ve24yCxTEhR59N7UlnJomWwCiIqbJRmKeiADU4=
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.1.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
@@ -967,8 +1021,10 @@ github.com/mattn/go-colorable v0.1.1/go.mod h1:FuOcm+DKB9mbwrcAfNl7/TZVBZ6rcncea
github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.8 h1:c1ghPdyEDarC70ftn0y+A/Ee++9zz8ljHG1b13eJ0s8=
github.com/mattn/go-colorable v0.1.8/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.10 h1:KWqbp83oZ6YOEgIbNW3BM1Jbe2tz4jgmWA9FOuAF8bw=
github.com/mattn/go-colorable v0.1.10/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-ieproxy v0.0.0-20190610004146-91bb50d98149/go.mod h1:31jz6HNzdxOmlERGGEc4v/dMssOfmp2p5bT/okiKFFc=
github.com/mattn/go-ieproxy v0.0.0-20190702010315-6dee0af9227d/go.mod h1:31jz6HNzdxOmlERGGEc4v/dMssOfmp2p5bT/okiKFFc=
github.com/mattn/go-ieproxy v0.0.1 h1:qiyop7gCflfhwCzGyeT0gro3sF9AIg9HU98JORTkqfI=
@@ -982,8 +1038,9 @@ github.com/mattn/go-isatty v0.0.9/go.mod h1:YNRxwqDuOph6SZLI9vUUz6OYw3QyUt7WiY2y
github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84=
github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.13 h1:qdl+GuBjcsKKDco5BsxPJlId98mSWNKqYA+Co0SC1yA=
github.com/mattn/go-isatty v0.0.13/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.14 h1:yVuAays6BHfxijgZPzw+3Zlu5yQgKGP2/hcQbHb7S9Y=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-runewidth v0.0.4/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-runewidth v0.0.7/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
@@ -1003,16 +1060,16 @@ github.com/mb0/glob v0.0.0-20160210091149-1eb79d2de6c4 h1:NK3O7S5FRD/wj7ORQ5C3Mx
github.com/mb0/glob v0.0.0-20160210091149-1eb79d2de6c4/go.mod h1:FqD3ES5hx6zpzDainDaHgkTIqrPaI9uX4CVWqYZoQjY=
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/miekg/dns v1.1.35 h1:oTfOaDH+mZkdcgdIjH6yBajRGtIwcwcaR+rt23ZSrJs=
github.com/miekg/dns v1.1.35/go.mod h1:KNUDUusw/aVsxyTYZM1oqvCicbwhgbNgztCETuNZ7xM=
github.com/miekg/dns v1.1.43 h1:JKfpVSCB84vrAmHzyrsxB5NAr5kLoMXZArPSw7Qlgyg=
github.com/miekg/dns v1.1.43/go.mod h1:+evo5L0630/F6ca/Z9+GAqzhjGyn8/c+TBaOyfEl0V4=
github.com/minio/argon2 v1.0.0 h1:cLB/fl0EeBqiDYhsIzIPTdLZhCykRrvdx3Eu3E5oqsE=
github.com/minio/argon2 v1.0.0/go.mod h1:XtOGJ7MjwUJDPtCqqrisx5QwVB/jDx+adQHigJVsQHQ=
github.com/minio/cli v1.22.0 h1:VTQm7lmXm3quxO917X3p+el1l0Ca5X3S4PM2ruUYO68=
github.com/minio/cli v1.22.0/go.mod h1:bYxnK0uS629N3Bq+AOZZ+6lwF77Sodk4+UL9vNuXhOY=
github.com/minio/colorjson v1.0.1 h1:+hvfP8C1iMB95AT+ZFDRE+Knn9QPd9lg0CRJY9DRpos=
github.com/minio/colorjson v1.0.1/go.mod h1:oPM3zQQY8Gz9NGtgvuBEjQ+gPZLKAGc7T+kjMlwtOgs=
github.com/minio/console v0.11.0 h1:iVVptZSCWHKs1pqHAlXMb5JTolKkZ88AyZDXCmedQ+w=
github.com/minio/console v0.11.0/go.mod h1:YSIxgJ2HsR8c277fWn7PF9GhgWtOZrkIJuyZ2TIToug=
github.com/minio/console v0.12.2 h1:0MrD+vknaxqVdcyD6bFyWo8EPfyS14NqaY1DIAVx7iI=
github.com/minio/console v0.12.2/go.mod h1:7mJoMR1Ki176CAsIsF3T2LmgRWzHDVvOfRVjnO9oyAI=
github.com/minio/csvparser v1.0.0 h1:xJEHcYK8ZAjeW4hNV9Zu30u+/2o4UyPnYgyjWp8b7ZU=
github.com/minio/csvparser v1.0.0/go.mod h1:lKXskSLzPgC5WQyzP7maKH7Sl1cqvANXo9YCto8zbtM=
github.com/minio/direct-csi v1.3.5-0.20210601185811-f7776f7961bf h1:wylCc/PdvdTIqYqVNEU9LJAZBanvfGY1TwTnjM3zQaA=
@@ -1026,34 +1083,32 @@ github.com/minio/kes v0.11.0/go.mod h1:mTF1Bv8YVEtQqF/B7Felp4tLee44Pp+dgI0rhCvgN
github.com/minio/kes v0.14.0 h1:plCGm4LwR++T1P1sXsJbyFRX54CE1WRuo9PAPj6MC3Q=
github.com/minio/kes v0.14.0/go.mod h1:OUensXz2BpgMfiogslKxv7Anyx/wj+6bFC6qA7BQcfA=
github.com/minio/madmin-go v1.0.12/go.mod h1:BK+z4XRx7Y1v8SFWXsuLNqQqnq5BO/axJ8IDJfgyvfs=
github.com/minio/madmin-go v1.1.6/go.mod h1:vw+c3/u+DeVKqReEavo///Cl2OO8nt5s4ee843hJeLs=
github.com/minio/madmin-go v1.1.9 h1:leOiyGhHL3wbF2FQoI/s/WROEPa3imR3cBP8wg9AVJk=
github.com/minio/madmin-go v1.1.9/go.mod h1:Iu0OnrMWNBYx1lqJTW+BFjBMx0Hi0wjw8VmqhiOs2Jo=
github.com/minio/madmin-go v1.1.10 h1:pfMgXkzdwADnNfVdNMJbwok2fjb2sJ7Q76kDt89RGzE=
github.com/minio/madmin-go v1.1.10/go.mod h1:Iu0OnrMWNBYx1lqJTW+BFjBMx0Hi0wjw8VmqhiOs2Jo=
github.com/minio/mc v0.0.0-20210626002108-cebf3318546f h1:hyFvo5hSFw2K417YvDr/vAKlgCG69uTuhZW/5LNdL0U=
github.com/minio/mc v0.0.0-20210626002108-cebf3318546f/go.mod h1:tuaonkPjVApCXkbtKENHBtsqUf7YTV33qmFrC+Pgp5g=
github.com/minio/madmin-go v1.1.11 h1:X3hRrdX2IO0mdEyb+vq1GB3wBhd6uSHq+XnKdWeATeM=
github.com/minio/madmin-go v1.1.11/go.mod h1:Iu0OnrMWNBYx1lqJTW+BFjBMx0Hi0wjw8VmqhiOs2Jo=
github.com/minio/mc v0.0.0-20211027024940-7866f97ef502 h1:7ip9qTspUniv+WDENgOcfUr95IccxG5aDkBM4Z96kQg=
github.com/minio/mc v0.0.0-20211027024940-7866f97ef502/go.mod h1:vxztwXLB9Gyl/h3Yh08Mpz1CB/0FO5Es0iQRpzxvS5I=
github.com/minio/md5-simd v1.1.0/go.mod h1:XpBqgZULrMYD3R+M28PcmP0CkI7PEMzB3U77ZrKZ0Gw=
github.com/minio/md5-simd v1.1.1 h1:9ojcLbuZ4gXbB2sX53MKn8JUZ0sB/2wfwsEcRw+I08U=
github.com/minio/md5-simd v1.1.1/go.mod h1:XpBqgZULrMYD3R+M28PcmP0CkI7PEMzB3U77ZrKZ0Gw=
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
github.com/minio/minio-go/v7 v7.0.10/go.mod h1:td4gW1ldOsj1PbSNS+WYK43j+P1XVhX/8W8awaYlBFo=
github.com/minio/minio-go/v7 v7.0.11-0.20210302210017-6ae69c73ce78/go.mod h1:mTh2uJuAbEqdhMVl6CMIIZLUeiMiWtJR4JB8/5g2skw=
github.com/minio/minio-go/v7 v7.0.11-0.20210607181445-e162fdb8e584/go.mod h1:WoyW+ySKAKjY98B9+7ZbI8z8S3jaxaisdcvj9TGlazA=
github.com/minio/minio-go/v7 v7.0.14/go.mod h1:S23iSP5/gbMwtxeY5FM71R+TkAYyzEdoNEDDwpt8yWs=
github.com/minio/minio-go/v7 v7.0.15-0.20211004160302-3b57c1e369ca/go.mod h1:pUV0Pc+hPd1nccgmzQF/EXh48l/Z/yps6QPF1aaie4g=
github.com/minio/minio-go/v7 v7.0.15 h1:r9/NhjJ+nXYrIYvbObhvc1wPj3YH1iDpJzz61uRKLyY=
github.com/minio/minio-go/v7 v7.0.15/go.mod h1:pUV0Pc+hPd1nccgmzQF/EXh48l/Z/yps6QPF1aaie4g=
github.com/minio/operator v0.0.0-20211011212245-31460bbbc4b7 h1:dkfuMNslMjGoJ4ArAMSoQhidYNdm3SgzLBP+f96O3/E=
github.com/minio/operator v0.0.0-20211011212245-31460bbbc4b7/go.mod h1:lDpuz8nwsfhKlfiBaA3Z8AW019fWEAjO2gltfLbdorE=
github.com/minio/operator/logsearchapi v0.0.0-20211011212245-31460bbbc4b7 h1:vFtQqCt67ETp0JAkOKRWTKkgwFv14Vc1jJSxmQ8wJE0=
github.com/minio/operator/logsearchapi v0.0.0-20211011212245-31460bbbc4b7/go.mod h1:R+38Pf3wfm+JMiyLPb/r8OMrBm0vK2hZgUT4y4aYoSY=
github.com/minio/parquet-go v1.0.0 h1:fcWsEvub04Nsl/4hiRBDWlbqd6jhacQieV07a+nhiIk=
github.com/minio/parquet-go v1.0.0/go.mod h1:aQlkSOfOq2AtQKkuou3mosNVMwNokd+faTacxxk/oHA=
github.com/minio/parquet-go v1.1.0 h1:j2Fn1/h7Ts/0qzdMZU9oCUKr0IJwRTD9Hg9QJyVaN6A=
github.com/minio/parquet-go v1.1.0/go.mod h1:nnAkbt2CG/DCQ3trcV3uyvwns4VjyoINF5vMqF5efOE=
github.com/minio/pkg v1.0.3/go.mod h1:obU54TZ9QlMv0TRaDgQ/JTzf11ZSXxnSfLrm4tMtBP8=
github.com/minio/pkg v1.0.4/go.mod h1:obU54TZ9QlMv0TRaDgQ/JTzf11ZSXxnSfLrm4tMtBP8=
github.com/minio/pkg v1.0.8/go.mod h1:32x/3OmGB0EOi1N+3ggnp+B5VFkSBBB9svPMVfpnf14=
github.com/minio/pkg v1.0.11/go.mod h1:32x/3OmGB0EOi1N+3ggnp+B5VFkSBBB9svPMVfpnf14=
github.com/minio/pkg v1.1.5 h1:phwKkJBQdVLyxOXC3RChPVGLtebplzQJ5jJ3l/HBvnk=
github.com/minio/pkg v1.1.3/go.mod h1:32x/3OmGB0EOi1N+3ggnp+B5VFkSBBB9svPMVfpnf14=
github.com/minio/pkg v1.1.5/go.mod h1:32x/3OmGB0EOi1N+3ggnp+B5VFkSBBB9svPMVfpnf14=
github.com/minio/pkg v1.1.6 h1:rCVrsniDzb031SRwSXtxxJQPntItQR02IXYgM6CirOw=
github.com/minio/pkg v1.1.6/go.mod h1:32x/3OmGB0EOi1N+3ggnp+B5VFkSBBB9svPMVfpnf14=
github.com/minio/selfupdate v0.3.1 h1:BWEFSNnrZVMUWXbXIgLDNDjbejkmpAmZvy/nCz1HlEs=
github.com/minio/selfupdate v0.3.1/go.mod h1:b8ThJzzH7u2MkF6PcIra7KaXO9Khf6alWPvMSyTDCFM=
github.com/minio/sha256-simd v0.1.1/go.mod h1:B5e1o+1/KgNmWrSQK08Y6Z1Vb5pwIktudl0J58iy0KM=
@@ -1197,8 +1252,9 @@ github.com/pelletier/go-toml v1.8.0/go.mod h1:D6yutnOGMveHEPV7VQOuvI/gXY61bv+9bA
github.com/performancecopilot/speed v3.0.0+incompatible/go.mod h1:/CLtqpZ5gBg1M9iaPbIdPPGyKcA8hKdoy6hAWba7Yac=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/phayes/checkstyle v0.0.0-20170904204023-bfd46e6a821d/go.mod h1:3OzsM7FXDQlpCiw2j81fOmAwQLnZnLGXVKUzeKQXIAw=
github.com/philhofer/fwd v1.1.1 h1:GdGcTjf5RNAxwS4QLsiMzJYj5KEvPJD3Abr261yRQXQ=
github.com/philhofer/fwd v1.1.1/go.mod h1:gk3iGcWd9+svBvR0sR+KPcfE+RNWozjowpeBVG3ZVNU=
github.com/philhofer/fwd v1.1.2-0.20210722190033-5c56ac6d0bb9 h1:6ob53CVz+ja2i7easAStApZJlh7sxyq3Cm7g1Di6iqA=
github.com/philhofer/fwd v1.1.2-0.20210722190033-5c56ac6d0bb9/go.mod h1:gk3iGcWd9+svBvR0sR+KPcfE+RNWozjowpeBVG3ZVNU=
github.com/pierrec/lz4 v1.0.2-0.20190131084431-473cd7ce01a1/go.mod h1:3/3N9NVKO0jef7pBehbT1qWhCMrIgbYNnFAZCqQ5LRc=
github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pierrec/lz4 v2.5.2+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
@@ -1209,10 +1265,10 @@ github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINE
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/profile v1.2.1/go.mod h1:hJw3o1OdXxsrSjjVksARp5W95eeEaEfptyVZyv6JUPA=
github.com/pkg/profile v1.3.0 h1:OQIvuDgm00gWVWGTf4m4mCt6W1/0YqU7Ntg0mySWgaI=
github.com/pkg/profile v1.3.0/go.mod h1:hJw3o1OdXxsrSjjVksARp5W95eeEaEfptyVZyv6JUPA=
github.com/pkg/xattr v0.4.1 h1:dhclzL6EqOXNaPDWqoeb9tIxATfBSmjqL0b4DpSjwRw=
github.com/pkg/xattr v0.4.1/go.mod h1:W2cGD0TBEus7MkUgv0tNZ9JutLtVO3cXu+IBRuHqnFs=
github.com/pkg/profile v1.6.0 h1:hUDfIISABYI59DyeB3OTay/HxSRwTQ8rB/H83k6r5dM=
github.com/pkg/profile v1.6.0/go.mod h1:qBsxPvzyUincmltOk6iyRVxHYg4adc0OFOv72ZdLa18=
github.com/pkg/xattr v0.4.3 h1:5Jx4GCg5ABtqWZH8WLzeI4fOtM1HyX4RBawuCoua1es=
github.com/pkg/xattr v0.4.3/go.mod h1:sBD3RAqlr8Q+RC3FutZcikpT8nyDrIEEBw2J744gVWs=
github.com/pmezard/go-difflib v0.0.0-20151028094244-d8ed2627bdf0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
@@ -1232,8 +1288,9 @@ github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5Fsn
github.com/prometheus/client_golang v1.3.0/go.mod h1:hJaj2vgQTGQmVCsAACORcieXFeDPbaTKGT+JTgUa3og=
github.com/prometheus/client_golang v1.5.1/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3OK1iX/F2sw+iXX5zU=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.8.0 h1:zvJNkoCFAnYFNC24FV8nW4JdRJ3GIFcLbg65lL/JDcw=
github.com/prometheus/client_golang v1.8.0/go.mod h1:O9VU6huf47PktckDQfMTX0Y8tY0/7TSWwj+ITvv0TnM=
github.com/prometheus/client_golang v1.11.0 h1:HNkLOAEQMIDv/K+04rukrLx6ch7msSRwf3/SASFAGtQ=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
@@ -1250,8 +1307,10 @@ github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt2
github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.13.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16Clt/msog/s=
github.com/prometheus/common v0.14.0 h1:RHRyE8UocrbjU+6UvRzwi6HjiDfxrrBU91TtbKzkGp4=
github.com/prometheus/common v0.14.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16Clt/msog/s=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/common v0.31.1 h1:d18hG4PkHnNAKNMOmFuXFaiY8Us0nird/2m60uS1AMs=
github.com/prometheus/common v0.31.1/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
@@ -1285,8 +1344,9 @@ github.com/rs/cors v1.7.0 h1:+88SsELBHx5r+hZ8TCkggzSstaWNbDvThkVK8H6f9ik=
github.com/rs/cors v1.7.0/go.mod h1:gFx+x8UowdsKA9AchylcLynDq+nNFfI8FkUZdN/jGCU=
github.com/rs/dnscache v0.0.0-20210201191234-295bba877686 h1:IJ6Df0uxPDtNoByV0KkzVKNseWvZFCNM/S9UoyOMCSI=
github.com/rs/dnscache v0.0.0-20210201191234-295bba877686/go.mod h1:qe5TWALJ8/a1Lqznoc5BDHpYX/8HU60Hm2AwRmqzxqA=
github.com/rs/xid v1.2.1 h1:mhH9Nq+C1fY2l1XIpgxIiUOfNpRBYH1kKcr+qfKgjRc=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/rs/xid v1.3.0 h1:6NjYksEUlhurdVehpc7S7dk6DAmcKv8V9gG0FsVN2U4=
github.com/rs/xid v1.3.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rs/zerolog v1.13.0/go.mod h1:YbFCdg8HfsridGWAh22vktObvhZbQsZXe4/zB0OKkWU=
github.com/rs/zerolog v1.15.0/go.mod h1:xYTKnLHcpfU2225ny5qZjxnj9NvkumZYjJHlAThCjNc=
github.com/rubiojr/go-vhd v0.0.0-20160810183302-0bfd3b39853c/go.mod h1:DM5xW0nvfNNm2uytzsvhI3OnX8uzaRAg8UX/CnDqbto=
@@ -1314,6 +1374,7 @@ github.com/shirou/gopsutil v0.0.0-20190901111213-e4ec7b275ada/go.mod h1:WWnYX4lz
github.com/shirou/gopsutil/v3 v3.21.4/go.mod h1:ghfMypLDrFSWN2c9cDYFLHyynQ+QUht0cv/18ZqVczw=
github.com/shirou/gopsutil/v3 v3.21.5/go.mod h1:ghfMypLDrFSWN2c9cDYFLHyynQ+QUht0cv/18ZqVczw=
github.com/shirou/gopsutil/v3 v3.21.6/go.mod h1:JfVbDpIBLVzT8oKbvMg9P3wEIMDDpVn+LwHTKj0ST88=
github.com/shirou/gopsutil/v3 v3.21.8/go.mod h1:YWp/H8Qs5fVmf17v7JNZzA0mPJ+mS2e9JdiUF9LlKzQ=
github.com/shirou/gopsutil/v3 v3.21.9 h1:Vn4MUz2uXhqLSiCbGFRc0DILbMVLAY92DSkT8bsYrHg=
github.com/shirou/gopsutil/v3 v3.21.9/go.mod h1:YWp/H8Qs5fVmf17v7JNZzA0mPJ+mS2e9JdiUF9LlKzQ=
github.com/shirou/w32 v0.0.0-20160930032740-bb4de0191aa4/go.mod h1:qsXQc7+bwAM3Q1u/4XEfrquwF8Lw7D7y5cD8CuHnfIc=
@@ -1387,22 +1448,23 @@ github.com/tdakkota/asciicheck v0.0.0-20200416190851-d7f85be797a2/go.mod h1:yHp0
github.com/tdakkota/asciicheck v0.0.0-20200416200610-e657995f937b/go.mod h1:yHp0ai0Z9gUljN3o0xMhYJnH/IcvkdTBOX2fmJ93JEM=
github.com/tetafro/godot v0.3.7/go.mod h1:/7NLHhv08H1+8DNj0MElpAACw1ajsCuf3TKNQxA5S+0=
github.com/tetafro/godot v0.4.2/go.mod h1:/7NLHhv08H1+8DNj0MElpAACw1ajsCuf3TKNQxA5S+0=
github.com/tidwall/gjson v1.7.4/go.mod h1:5/xDoumyyDNerp2U36lyolv46b3uF/9Bu6OfyQ9GImk=
github.com/tidwall/gjson v1.7.5 h1:zmAN/xmX7OtpAkv4Ovfso60r/BiCi5IErCDYGNJu+uc=
github.com/tidwall/gjson v1.7.5/go.mod h1:5/xDoumyyDNerp2U36lyolv46b3uF/9Bu6OfyQ9GImk=
github.com/tidwall/match v1.0.3 h1:FQUVvBImDutD8wJLN6c5eMzWtjgONK9MwIBCOrUJKeE=
github.com/tidwall/match v1.0.3/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/gjson v1.10.2/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.11.0 h1:C16pk7tQNiH6VlCrtIXL1w8GaOsi1X3W8KDkE1BuYd4=
github.com/tidwall/gjson v1.11.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tidwall/pretty v1.1.0 h1:K3hMW5epkdAVwibsQEfR/7Zj0Qgt4DxtNumTq/VloO8=
github.com/tidwall/pretty v1.1.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tidwall/sjson v1.1.6 h1:8fDdlahON04OZBlTQCIatW8FstSFJz8oxidj5h0rmSQ=
github.com/tidwall/sjson v1.1.6/go.mod h1:KN3FZ7odvXIHPbJdhNorK/M9lWweVUbXsXXhrJ/kGOA=
github.com/tidwall/pretty v1.2.0 h1:RWIZEg2iJ8/g6fDDYzMpobmaoGh5OLl4AXtGUGPcqCs=
github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/tidwall/sjson v1.2.3 h1:5+deguEhHSEjmuICXZ21uSSsXotWMA0orU783+Z7Cp8=
github.com/tidwall/sjson v1.2.3/go.mod h1:5WdjKx3AQMvCJ4RG6/2UYT7dLrGvJUV1x4jdTAyGvZs=
github.com/timakin/bodyclose v0.0.0-20190930140734-f7f2e9bca95e/go.mod h1:Qimiffbc6q9tBWlVV6x0P9sat/ao1xEkREYPPj9hphk=
github.com/timakin/bodyclose v0.0.0-20200424151742-cb6215831a94/go.mod h1:Qimiffbc6q9tBWlVV6x0P9sat/ao1xEkREYPPj9hphk=
github.com/tinylib/msgp v1.1.3/go.mod h1:+d+yLhGm8mzTaHzB+wgMYrodPfmZrzkirds8fDWklFE=
github.com/tinylib/msgp v1.1.5/go.mod h1:eQsjooMTnV42mHu917E26IogZ2930nFyBQdofk10Udg=
github.com/tinylib/msgp v1.1.6 h1:i+SbKraHhnrf9M5MYmvQhFnbLhAXSDWF8WWsuyRdocw=
github.com/tinylib/msgp v1.1.6/go.mod h1:75BAfg2hauQhs3qedfdDZmWAPcFMAvJE5b9rGOMufyw=
github.com/tinylib/msgp v1.1.7-0.20211026165309-e818a1881b0e h1:P5tyWbssToKowBPTA1/EzqPXwrZNc8ZeNPdjgpcDEoI=
github.com/tinylib/msgp v1.1.7-0.20211026165309-e818a1881b0e/go.mod h1:g7jEyb18KPe65d9RRhGw+ThaJr5duyBH8eaFgBUor7Y=
github.com/tj/assert v0.0.0-20171129193455-018094318fb0/go.mod h1:mZ9/Rh9oLWpLLDRpvE+3b7gP/C2YyLFYxNmcLnPTMe0=
github.com/tj/go-elastic v0.0.0-20171221160941-36157cbbebc2/go.mod h1:WjeM0Oo1eNAjXGDx2yma7uG2XoyRZTq1uv3M/o7imD0=
github.com/tj/go-kinesis v0.0.0-20171128231115-08b17f58cb1b/go.mod h1:/yhzCV0xPfx6jb1bBgRFjl5lytqVqZXEaeqWP8lTEao=
@@ -1468,12 +1530,15 @@ go.etcd.io/bbolt v1.3.5/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
go.etcd.io/etcd v0.0.0-20191023171146-3cf2f69b5738/go.mod h1:dnLIgRNXwCJa5e+c6mIZCrds/GIG4ncV9HhK5PX7jPg=
go.etcd.io/etcd v0.5.0-alpha.5.0.20200910180754-dd1b699fc489 h1:1JFLBqwIgdyHN1ZtgjTBwO+blA6gVOmZurpiMEsETKo=
go.etcd.io/etcd v0.5.0-alpha.5.0.20200910180754-dd1b699fc489/go.mod h1:yVHk9ub3CSBatqGNg7GRmsnfLWtoW60w4eDYfh7vHDg=
go.etcd.io/etcd/api/v3 v3.5.0-beta.4 h1:etIejKeELg3fIXt0i71TXtx1OjK9q+oegcv00zipiis=
go.etcd.io/etcd/api/v3 v3.5.0-beta.4/go.mod h1:yF0YUmBghT48aC0/eTFrhULo+uKQAr5spQQ6sRhPauE=
go.etcd.io/etcd/client/pkg/v3 v3.5.0-beta.4 h1:IVvCfkch8truS86wSy67AbnXCYq8nYpM8NPTW14Ttp0=
go.etcd.io/etcd/api/v3 v3.5.0 h1:GsV3S+OfZEOCNXdtNkBSR7kgLobAa/SO6tCxRa0GAYw=
go.etcd.io/etcd/api/v3 v3.5.0/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
go.etcd.io/etcd/client/pkg/v3 v3.5.0-beta.4/go.mod h1:a+pbz+UrcOpvve1Qxf6tGovi15PjgtRhi0QTO2Nlc4U=
go.etcd.io/etcd/client/v3 v3.5.0-beta.4 h1:AW/Sj3Oq7ZYjgNsYU0xad/tu3BH/Jnn2OuLvQoUNMEM=
go.etcd.io/etcd/client/pkg/v3 v3.5.0 h1:2aQv6F436YnN7I4VbI8PPYrBhu+SmrTaADcf8Mi/6PU=
go.etcd.io/etcd/client/pkg/v3 v3.5.0/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g=
go.etcd.io/etcd/client/v3 v3.5.0-beta.4/go.mod h1:0L1RulN1QSXq6uKPMUSX+OTAYyFkapMK7iUHXXIH/1E=
go.etcd.io/etcd/client/v3 v3.5.0 h1:62Eh0XOro+rDwkrypAGDfgmNh5Joq+z+W9HZdlXMzek=
go.etcd.io/etcd/client/v3 v3.5.0/go.mod h1:AIKXXVX/DQXtfTEqBryiLTUXwON+GuvO6Z7lLS/oTh0=
go.mongodb.org/mongo-driver v1.0.3/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.mongodb.org/mongo-driver v1.1.1/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.mongodb.org/mongo-driver v1.1.2/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
@@ -1491,28 +1556,36 @@ go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.5 h1:dntmOdLpSpHlVqbW5Eay97DelsZHe+55D+xC6i0dDS0=
go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.opencensus.io v0.23.0 h1:gqCw0LfLxScz8irSi8exQc7fyQ0fKQU/qnC/X8+V/1M=
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/atomic v1.9.0 h1:ECmE8Bn/WFTYwEW/bpKD3M8VtR/zQVbavAoalC1PYyE=
go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A=
go.uber.org/goleak v1.1.11-0.20210813005559-691160354723 h1:sHOAIxRGBp443oHZIPB+HsUGaksVCXVQENPxwTfQdH4=
go.uber.org/goleak v1.1.11-0.20210813005559-691160354723/go.mod h1:cwTWslyiVhfpKIDGSZEM2HlOvcqm+tG4zioyIeLoqMQ=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/multierr v1.3.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
go.uber.org/multierr v1.5.0/go.mod h1:FeouvMocqHpRaaGuG9EjoKcStLC43Zu/fmqdUMPcKYU=
go.uber.org/multierr v1.6.0 h1:y6IPFStTAIT5Ytl7/XYmHvzXQ7S3g/IeZW9hyZ5thw4=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.uber.org/multierr v1.7.0 h1:zaiO/rmgFjbmCXdSYJWQcdvOCsthmdaHfr3Gm2Kx4Ec=
go.uber.org/multierr v1.7.0/go.mod h1:7EAYxJLBy9rStEaz58O2t4Uvip6FSURkq8/ppBp95ak=
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA=
go.uber.org/zap v1.8.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.9.1/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.13.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
go.uber.org/zap v1.15.0/go.mod h1:Mb2vm2krFEG5DV0W9qcHBYFtp/Wku1cvYaqPsS/WYfc=
go.uber.org/zap v1.16.1-0.20210329175301-c23abee72d19 h1:040c3dLNhgFQkoojH2AMpHCy4SrvhmxdU72d9GLGGE0=
go.uber.org/zap v1.16.1-0.20210329175301-c23abee72d19/go.mod h1:aMfIlz3TDBfB0BwTCKFU1XbEmj9zevr5S5LcBr85MXw=
go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
go.uber.org/zap v1.19.1 h1:ue41HOKd1vGURxrmeKIgELGb3jPW9DMUDGtsinblHwI=
go.uber.org/zap v1.19.1/go.mod h1:j3DNczoxDZroyBnOT1L/Q79cfUMGZxlv/9dzN7SM1rI=
gocloud.dev v0.19.0/go.mod h1:SmKwiR8YwIMMJvQBKLsC3fHNyMwXLw3PMDO+VVteJMI=
golang.org/x/arch v0.0.0-20201008161808-52c3e6f60cff/go.mod h1:flIaEI6LNU6xOCD5PaJvn9wGP0agmIOqjrtsKGRguv4=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
@@ -1581,8 +1654,9 @@ golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHl
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b h1:Wh+f8QHJXR411sJR8/vRBTZ7YapZaRvUcLFFJhusH0k=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
@@ -1592,8 +1666,8 @@ golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzB
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.1-0.20200828183125-ce943fd02449/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2 h1:Gz96sIWK3OalVv/I/qNygP42zyoKp3xptRVCWRFEBvo=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -1645,16 +1719,20 @@ golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81R
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200904194848-62affa334b73/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210224082022-3d97a244fca7/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210421230115-4e50805a0758/go.mod h1:72T/g9IO56b78aLF+1Kcs5dz7/ng1VjMUvfKvpfy+jM=
golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f h1:OfiFi4JbukWwe3lzw+xunroH1mnC1e2Gy5cxNJApiSY=
golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210928044308-7d9f5e0b762b/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211020060615-d418f374d309 h1:A0lJIi+hcTR6aajJH4YqKWwohY4aW9RO7oRMcdv+HKI=
golang.org/x/net v0.0.0-20211020060615-d418f374d309/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
@@ -1663,8 +1741,18 @@ golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4Iltr
golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d h1:TzXSXBo42m9gQenoE3b9BGiEpg5IG2JkU5FkPIawgtw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20201109201403-9fd604954f58/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210805134026-6f1e6394065a/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f h1:Qmd2pbz05z7z6lm0DrgQVVPuBm92jqujBKMHMOlOQEw=
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -1685,7 +1773,6 @@ golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180926160741-c2ed4eda69e7/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181021155630-eda9bb28ed51/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -1717,7 +1804,6 @@ golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190922100055-0a153f010e69/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190924154521-2837fb4f24fe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -1750,32 +1836,49 @@ golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200828194041-157a740278f4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200831180312-196b9ba8737a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201015000850-e3ed0017c211/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201101102859-da207088b7d1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201112073958-5cba982894dd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210217105451-b926d437f341/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210225134936-a50acf3fe073/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210303074136-134d130e1a04/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210316164454-77fc1eacc6aa/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210420072515-93ed5bcd2bfe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210816074244-15123e1e1f71/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211015200801-69063c4bb744 h1:KzbpndAYEM+4oHRp9JmB2ewj0NHHxO3Z0g7Gus2O1kk=
golang.org/x/sys v0.0.0-20211015200801-69063c4bb744/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210917161153-d61c044b1678/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211020174200-9d6173849985 h1:LOlKVhfDyahgmqa97awczplwkjzNaELFg3zRIJ13RYo=
golang.org/x/sys v0.0.0-20211020174200-9d6173849985/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210406210042-72f3dc4e9b72 h1:VqE9gduFZ4dbR7XoL77lHFp0/DyDUBKSXK7CMFkVcV0=
golang.org/x/term v0.0.0-20210406210042-72f3dc4e9b72/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 h1:JGgROgKl9N8DuW20oFS5gxc+lE67/N3FcwmBPMe7ArY=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -1785,16 +1888,18 @@ golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20200416051211-89c76fbcd5d1/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba h1:O8mE0/t419eoIwhTFpKVkHiTs/Igowgfkj25AcZrtiE=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac h1:7zkz7BUtwNFFqcowJ+RIgu2MaV/MapERkDIy+mwPyjs=
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180525024113-a5b4c53f6e8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -1849,7 +1954,6 @@ golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtn
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191216052735-49a3e744a425/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200102140908-9497f49d5709/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
@@ -1884,15 +1988,23 @@ golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc
golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200828161849-5deb26317202/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE=
golang.org/x/tools v0.0.0-20200918232735-d647fc253266/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU=
golang.org/x/tools v0.0.0-20201022035929-9cf592e881e9/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201105001634-bc3cf281b174/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210114065538-d78b04bdf963/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.1 h1:wGiQel/hW0NnEkJUk8lbzkX2gFJU6PFxf1v5OlCfuOs=
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.3/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/xerrors v0.0.0-20190410155217-1f06c39b4373/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -1925,8 +2037,21 @@ google.golang.org/api v0.25.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0M
google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM=
google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc=
google.golang.org/api v0.31.0 h1:1w5Sz/puhxFo9lTtip2n47k7toB/U2nCqOKNHd3Yrbo=
google.golang.org/api v0.31.0/go.mod h1:CL+9IBCa2WWU6gRuBWaKqGWLFFwbEUXkfeMkHLQWYWo=
google.golang.org/api v0.35.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg=
google.golang.org/api v0.36.0/go.mod h1:+z5ficQTmoYpPn8LCUNVpK5I7hwkpjbcgqA7I34qYtE=
google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjRCQ8=
google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU=
google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94=
google.golang.org/api v0.47.0/go.mod h1:Wbvgpq1HddcWVtzsVLyfLp8lDg6AA241LmgIL59tHXo=
google.golang.org/api v0.48.0/go.mod h1:71Pr1vy+TAZRPkPs/xlCf5SsU8WjuAWv1Pfjbtukyy4=
google.golang.org/api v0.50.0/go.mod h1:4bNT5pAuq5ji4SRZm+5QIkjny9JAyVD/3gaSihNefaw=
google.golang.org/api v0.51.0/go.mod h1:t4HdrdoNgyN5cbEfm7Lum0lcLDLiise1F8qDKX00sOU=
google.golang.org/api v0.54.0/go.mod h1:7C4bFFOvVDGXjfDTAsgGwDgAxRDeQ4X8NvUedIt6z3k=
google.golang.org/api v0.55.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE=
google.golang.org/api v0.56.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE=
google.golang.org/api v0.58.0 h1:MDkAbYIB1JpSgCTOCYYoIec/coMlKK4oVbpnBLLcyT0=
google.golang.org/api v0.58.0/go.mod h1:cAbP2FsxoGVNwtgNAmmn3y5G1TWAiVYRmg4yku3lv+E=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.3.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
@@ -1934,8 +2059,9 @@ google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.6 h1:lMO5rYAqUxkmaj76jAkRUvt5JZgFymx/+Q5Mzfivuhc=
google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190404172233-64821d5d2107/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
@@ -1975,8 +2101,34 @@ google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6D
google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200831141814-d751682dd103/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200901141002-b3bf27a9dbd1/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a h1:pOwg4OoaRYScjmR4LlLgdtnyoHYTSAVhhqe5uPdpII8=
google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210222152913-aa3ee6e6a81c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210303154014-9728d6b83eeb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
google.golang.org/genproto v0.0.0-20210513213006-bf773b8c8384/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20210604141403-392c879c8b08/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20210608205507-b6d2f5bf0d7d/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20210624195500-8bfb893ecb84/go.mod h1:SzzZ/N+nwJDaO1kznhnlzqS8ocJICar6hYhVyhi++24=
google.golang.org/genproto v0.0.0-20210713002101-d411969a0d9a/go.mod h1:AxrInvYm1dci+enl5hChSFPOmmUF1+uAa/UsgNRWd7k=
google.golang.org/genproto v0.0.0-20210716133855-ce7ef5c701ea/go.mod h1:AxrInvYm1dci+enl5hChSFPOmmUF1+uAa/UsgNRWd7k=
google.golang.org/genproto v0.0.0-20210728212813-7823e685a01f/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48=
google.golang.org/genproto v0.0.0-20210805201207-89edb61ffb67/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48=
google.golang.org/genproto v0.0.0-20210813162853-db860fec028c/go.mod h1:cFeNkxwySK631ADgubI+/XFU/xp8FD5KIVV4rj8UC5w=
google.golang.org/genproto v0.0.0-20210821163610-241b8fcbd6c8/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20210828152312-66f60bf46e71/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20210831024726-fe130286e0e2/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20210909211513-a8c4777a87af/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20210917145530-b395a37504d4/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20210928142010-c7af6a1a74c9 h1:XTH066D35LyHehRwlYhoK3qA+Hcgvg5xREG4kFQEW1Y=
google.golang.org/genproto v0.0.0-20210928142010-c7af6a1a74c9/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
@@ -1998,8 +2150,20 @@ google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM
google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0=
google.golang.org/grpc v1.37.0 h1:uSZWeQJX5j11bIQ4AJoj+McDBo29cY1MCoC1wO3ts+c=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.34.0/go.mod h1:WotjhfgOW/POjDeRt8vscBtXq+2VjORFy659qA51WJ8=
google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.39.0/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
google.golang.org/grpc v1.39.1/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/grpc v1.41.0 h1:f+PlOh7QV4iIJkPrx5NQ7qaNGFQ3OTse67yaDHfju4E=
google.golang.org/grpc v1.41.0/go.mod h1:U3l9uK9J0sini8mHphKoXyaqDA/8VyGnDee1zzIUK6k=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -2011,8 +2175,9 @@ google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpAD
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0 h1:bxAC2xTBsZGibn2RTntX0oH50xLsqy1OxA9tTL3p/lk=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/asn1-ber.v1 v1.0.0-20181015200546-f715ec2f112d/go.mod h1:cuepJuh7vyXfUyUwEgHQXw849cJrilpS5NeIjOWESAw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=

Some files were not shown because too many files have changed in this diff.