Fix generating statistics for time periods smaller than we can measure (#90069)

If the time period for the mean/time weighted average was smaller
than we can measure (less than one microsecond), generating
statistics would fail with a divide by zero error. This likely
only happens if the database schema precision is incorrect.
J. Nick Koston 2023-03-21 15:12:45 -10:00 committed by GitHub
parent 0e7ffff869
commit 88ad97f112
2 changed files with 344 additions and 1 deletion
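
Before the diff, a minimal standalone sketch of the failure mode the message describes (this is not the Home Assistant code itself; the timestamps and the accumulated value are purely illustrative): when the start and end of the period collapse to the same timestamp, the divisor becomes 0.0, so the old code raised ZeroDivisionError while the fix reports 0.0 instead.

from datetime import datetime, timezone

start = datetime(2023, 3, 21, 12, 0, 0, 0, tzinfo=timezone.utc)
end = start  # a period shorter than one microsecond collapses to the same timestamp

accumulated = 5.0  # illustrative "value * seconds" total for the period
period_seconds = (end - start).total_seconds()  # 0.0

# Old behaviour: accumulated / period_seconds raised ZeroDivisionError.
# New behaviour: a period smaller than we can measure has no meaningful
# average, so 0.0 is returned instead of dividing by zero.
average = 0.0 if period_seconds == 0 else accumulated / period_seconds
print(average)  # 0.0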

@@ -119,7 +119,16 @@ def _time_weighted_average(
         duration = end - old_start_time
         accumulated += old_fstate * duration.total_seconds()
 
-    return accumulated / (end - start).total_seconds()
+    period_seconds = (end - start).total_seconds()
+    if period_seconds == 0:
+        # If the only state change that happened was at the exact moment
+        # at the end of the period, we can't calculate a meaningful average
+        # so we return 0.0 since it represents a time duration smaller than
+        # we can measure. This probably means the precision of the statistics
+        # column schema in the database is incorrect but it is actually possible
+        # to happen if the state change event fired at the exact microsecond
+        return 0.0
+    return accumulated / period_seconds
 
 
 def _get_units(fstates: list[tuple[float, State]]) -> set[str | None]:
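
A test-style sketch of the behaviour after the fix. It assumes a Home Assistant development checkout; the import path of the private helper _time_weighted_average, the State keyword arguments, and the sensor entity used are assumptions that may differ between releases, so treat this as an illustration rather than one of the tests added by this commit.

from datetime import datetime, timedelta, timezone

from homeassistant.components.sensor.recorder import _time_weighted_average
from homeassistant.core import State

start = datetime(2023, 3, 21, 12, 0, tzinfo=timezone.utc)

# A single state change that lands exactly at the boundary of a zero-length period.
fstates = [(10.0, State("sensor.test", "10", last_updated=start))]

# Before this commit, dividing by (end - start).total_seconds() == 0.0 raised
# ZeroDivisionError; with the guard, the helper reports 0.0 for the zero-length period.
assert _time_weighted_average(fstates, start, start) == 0.0

# A normal, measurable period still averages as before: the single state is
# held for the whole hour, so the time weighted average equals its value.
assert _time_weighted_average(fstates, start, start + timedelta(hours=1)) == 10.0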