package statistics
import (
	"math"
	"sync"
	"time"

	"github.com/matrix-org/dendrite/federationsender/storage"
	"github.com/matrix-org/gomatrixserverlib"
	"github.com/sirupsen/logrus"
	"go.uber.org/atomic"
)
// Statistics contains information about all of the remote federated
// hosts that we have interacted with. It is essentially a threadsafe
// wrapper around a map of per-server statistics.
type Statistics struct {
	DB      storage.Database
	servers map[gomatrixserverlib.ServerName]*ServerStatistics
	mutex   sync.RWMutex

	// How many times should we tolerate consecutive failures before we
	// just blacklist the host altogether? The backoff is exponential,
	// so the longest wait between attempts is 2**failures seconds.
	FailuresUntilBlacklist uint32
}
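
// exampleNewStatistics is an illustrative sketch, not part of Dendrite
// itself: it shows how a component might construct a Statistics value.
// The threshold of 16 is an assumption for illustration only; with it,
// the final backoff interval before blacklisting is 2^16 seconds,
// roughly 18 hours.
func exampleNewStatistics(db storage.Database) *Statistics {
	return &Statistics{
		DB:                     db,
		FailuresUntilBlacklist: 16,
	}
}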
// ForServer returns server statistics for the given server name. If
// none exist yet, it will create empty statistics and return those.
func (s *Statistics) ForServer(serverName gomatrixserverlib.ServerName) *ServerStatistics {
	// If the map hasn't been initialised yet then do that. Hold the
	// write lock for the nil check as well, so that concurrent callers
	// can't race on initialising the map.
	s.mutex.Lock()
	if s.servers == nil {
		s.servers = make(map[gomatrixserverlib.ServerName]*ServerStatistics)
	}
	s.mutex.Unlock()
	// Look up if we have statistics for this server already.
	s.mutex.RLock()
	server, found := s.servers[serverName]
	s.mutex.RUnlock()
	// If we don't, then make one.
	if !found {
		s.mutex.Lock()
		server = &ServerStatistics{
			statistics: s,
			serverName: serverName,
		}
		s.servers[serverName] = server
		s.mutex.Unlock()
		// Restore the blacklist flag from the database, guarding
		// against a nil database as the other calls in this file do.
		if s.DB != nil {
			blacklisted, err := s.DB.IsServerBlacklisted(serverName)
			if err != nil {
				logrus.WithError(err).Errorf("Failed to get blacklist entry %q", serverName)
			} else {
				server.blacklisted.Store(blacklisted)
			}
		}
	}
	return server
}
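
// exampleForServer is an illustrative sketch, not part of Dendrite: it
// shows how a caller such as a destination queue might consult the
// statistics before attempting to send. The destination name is
// hypothetical.
func exampleForServer(stats *Statistics) bool {
	server := stats.ForServer("remote.example.com")
	// Skip the destination entirely if it is currently blacklisted.
	return !server.Blacklisted()
}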
// ServerStatistics contains information about our interactions with a
// remote federated host, e.g. how many times we were successful, how
// many times we failed etc. It also manages the backoff time and the
// blacklisting of a remote host if it remains uncooperative.
type ServerStatistics struct {
	statistics     *Statistics                  // the parent Statistics
	serverName     gomatrixserverlib.ServerName // the name of the remote host
	blacklisted    atomic.Bool                  // is the node blacklisted
	backoffStarted atomic.Bool                  // is the backoff started
	backoffUntil   atomic.Value                 // time.Time until this backoff interval ends
	backoffCount   atomic.Uint32                // number of consecutive backoffs so far
	successCounter atomic.Uint32                // how many times have we succeeded?
}

// duration returns how long the next backoff interval should be: the
// wait doubles with each consecutive failure, i.e. 2**count seconds.
func (s *ServerStatistics) duration(count uint32) time.Duration {
	return time.Second * time.Duration(math.Exp2(float64(count)))
}
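
// exampleBackoffSchedule is an illustrative sketch, not used by Dendrite:
// it collects the intervals that duration produces so the exponential
// growth is easy to see. For counts 0 through 5 the result is 1s, 2s,
// 4s, 8s, 16s and 32s.
func exampleBackoffSchedule(s *ServerStatistics, failures uint32) []time.Duration {
	schedule := make([]time.Duration, 0, failures)
	for count := uint32(0); count < failures; count++ {
		schedule = append(schedule, s.duration(count))
	}
	return schedule
}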
// Success updates the server statistics with a new successful
// attempt, which increases the success counter and resets the
// backoff and failure state. If a host was blacklisted at this
// point then we will unblacklist it.
func (s *ServerStatistics) Success() {
	s.successCounter.Add(1)
	s.backoffStarted.Store(false)
	s.backoffCount.Store(0)
	s.blacklisted.Store(false)
	if s.statistics.DB != nil {
		if err := s.statistics.DB.RemoveServerFromBlacklist(s.serverName); err != nil {
			logrus.WithError(err).Errorf("Failed to remove %q from blacklist", s.serverName)
		}
	}
}

// Failure marks a failure and starts backing off if needed. The next
// call to BackoffIfRequired will do the right thing after this. It
// returns the time that the backoff for the current failure will wait
// until, and a bool signalling whether we have blacklisted the host
// and should therefore give up.
func (s *ServerStatistics) Failure() (time.Time, bool) {
	// If we aren't already backing off, this call will start
	// a new backoff period. Reset the counter to 0 so that
	// we back off only for short periods of time to start with.
	if s.backoffStarted.CAS(false, true) {
		s.backoffCount.Store(0)
	}

	// Check if we have blacklisted this node.
	if s.blacklisted.Load() {
		return time.Now(), true
	}

	// If we're already backing off and we haven't yet surpassed
	// the deadline then return that. Repeated calls to Failure
	// within a single backoff interval will have no side effects.
	if until, ok := s.backoffUntil.Load().(time.Time); ok && !time.Now().After(until) {
		return until, false
	}

	// We're either backing off and have passed the deadline, or
	// we aren't backing off, so work out what the next interval
	// will be.
	count := s.backoffCount.Load()
	until := time.Now().Add(s.duration(count))
	s.backoffUntil.Store(until)
	return until, false
}
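
// exampleReportOutcome is an illustrative sketch, not part of Dendrite:
// it shows the intended pairing of Success and Failure around a single
// send attempt. The send callback is hypothetical.
func exampleReportOutcome(server *ServerStatistics, send func() error) {
	if err := send(); err != nil {
		until, blacklisted := server.Failure()
		if blacklisted {
			return // the host is blacklisted now, so give up entirely
		}
		_ = until // a real queue would not retry before this deadline
		return
	}
	server.Success()
}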

// BackoffInfo returns information about the current or previous backoff.
// It returns the last backoffUntil time and whether the server is
// currently blacklisted or not.
func (s *ServerStatistics) BackoffInfo() (*time.Time, bool) {
	until, ok := s.backoffUntil.Load().(time.Time)
	if ok {
		return &until, s.blacklisted.Load()
	}
	return nil, s.blacklisted.Load()
}
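
// exampleBackoffRemaining is an illustrative sketch, not part of
// Dendrite: it shows how a caller might turn BackoffInfo into the
// remaining wait time for a destination.
func exampleBackoffRemaining(server *ServerStatistics) time.Duration {
	until, blacklisted := server.BackoffInfo()
	if blacklisted || until == nil {
		return 0
	}
	return time.Until(*until)
}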

// BackoffIfRequired will block for as long as the current backoff
// requires, if needed, and will otherwise do nothing. It returns the
// duration that was waited for and whether to give up on the
// destination or not.
func (s *ServerStatistics) BackoffIfRequired(backingOff *atomic.Bool, interrupt <-chan bool) (time.Duration, bool) {
	if started := s.backoffStarted.Load(); !started {
		return 0, false
	}

	// Work out if we should be blacklisting at this point.
	count := s.backoffCount.Inc()
	if count >= s.statistics.FailuresUntilBlacklist {
		// We've exceeded the maximum number of times we're willing
		// to back off, which is probably in the region of hours by
		// now. Mark the host as blacklisted and tell the caller to
		// give up.
		s.blacklisted.Store(true)
		if s.statistics.DB != nil {
			if err := s.statistics.DB.AddServerToBlacklist(s.serverName); err != nil {
				logrus.WithError(err).Errorf("Failed to add %q to blacklist", s.serverName)
			}
		}
		return 0, true
	}

	// Work out when we should wait until.
	duration := s.duration(count)
	until := time.Now().Add(duration)
	s.backoffUntil.Store(until)

	// Notify the destination queue that we're backing off now.
	if backingOff != nil {
		backingOff.Store(true)
		defer backingOff.Store(false)
	}

	// Report how long we're backing off for.
	logrus.Warnf("Backing off %q for %s", s.serverName, duration)

	// Wait for either an interruption or for the backoff to
	// complete.
	select {
	case <-interrupt:
		logrus.Debugf("Interrupting backoff for %q", s.serverName)
	case <-time.After(duration):
	}

	return duration, false
}
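
// exampleWaitBeforeRetry is an illustrative sketch, not part of Dendrite:
// it shows how a destination queue worker might block on the backoff
// before retrying, and how a send on the interrupt channel cuts the wait
// short. The backingOff flag lets other goroutines see that the worker
// is sleeping.
func exampleWaitBeforeRetry(server *ServerStatistics, backingOff *atomic.Bool, interrupt <-chan bool) bool {
	if _, giveUp := server.BackoffIfRequired(backingOff, interrupt); giveUp {
		return false // blacklisted, so stop trying this destination
	}
	return true // the backoff has elapsed, safe to retry now
}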

// Blacklisted returns true if the server is blacklisted and false
// otherwise.
func (s *ServerStatistics) Blacklisted() bool {
	return s.blacklisted.Load()
}

// SuccessCount returns the number of successful requests. This is
// usually useful in constructing transaction IDs.
func (s *ServerStatistics) SuccessCount() uint32 {
	return s.successCounter.Load()
}
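
// exampleTransactionMarker is an illustrative sketch, not Dendrite's
// actual scheme: the doc comment above notes that the success count is
// useful for constructing transaction IDs, and pairing it with a
// timestamp gives a value that stays stable across retries of the same
// transaction. A real implementation would format the pair into a
// gomatrixserverlib.TransactionID string.
func exampleTransactionMarker(server *ServerStatistics) (int64, uint32) {
	return time.Now().UnixNano(), server.SuccessCount()
}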