d01465cf6d
We currently have a nasty issue with archive assets wherein they read their entire contents into memory each time they are accessed (e.g. for hashing or translation). This interacts badly with scenarios that place large amounts of data in an archive: besides limiting the size of archive the engine can handle, it also bloats the engine's memory requirements. This appears to have caused issues when running the PPC in AWS: evidence suggests that the very high peak memory requirements this approach implies caused heavy swap traffic that impacted the service's availability.

To fix this issue, these changes move archives onto a streaming read model. To read an archive, a user:

- Opens the archive with `Archive.Open`, which returns an `ArchiveReader`.
- Iterates over its contents using `ArchiveReader.Next`. Each returned blob must be read in full between successive calls to `ArchiveReader.Next`; this requirement is essentially forced upon us by the streaming nature of TAR archives.
- Closes the `ArchiveReader` with `ArchiveReader.Close`.

Under this model, neither the complete contents of the archive nor those of any of its constituent files need to be in memory at any given time.

Fixes #325.
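To make the flow concrete, here is a minimal sketch of a consumer of the new API that hashes each archive member one at a time. The import path, the exact signatures (`Open` returning `(ArchiveReader, error)`, `Next` returning `(name, blob, error)` and signaling end-of-archive with `io.EOF`), and the assumption that a blob satisfies `io.Reader` are all inferred from the description above rather than confirmed against the code:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"

	// Assumed import path for the resource package that defines Archive.
	"github.com/pulumi/pulumi/pkg/resource"
)

// hashArchiveMembers streams each member of an archive through a hasher, so
// neither the archive nor any single member is ever held in memory in full.
// Signatures of Open/Next/Close are assumptions based on the change summary.
func hashArchiveMembers(archive *resource.Archive) error {
	reader, err := archive.Open()
	if err != nil {
		return err
	}
	defer reader.Close()

	for {
		name, blob, err := reader.Next()
		if err == io.EOF {
			return nil // end of archive
		} else if err != nil {
			return err
		}

		// Consume the blob in full before the next call to Next -- a
		// requirement imposed by the streaming nature of TAR archives.
		h := sha256.New()
		if _, err := io.Copy(h, blob); err != nil {
			return err
		}
		fmt.Printf("%s: %x\n", name, h.Sum(nil))
	}
}
```

Because each member is piped through `io.Copy`, peak memory usage is bounded by the copy buffer rather than by the size of the archive's largest file.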