* Increase websocket peak messages to match max expected entities
During startup the websocket would frequently disconnect if more than
4096 entities were added back-to-back. Some MQTT setups have more than
10,000 entities, so match the websocket peak value to the maximum
expected number of entities.
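A minimal sketch of the idea behind this change, with illustrative names and numbers rather than the actual Home Assistant constants: the peak number of queued outgoing messages is raised to cover the largest expected entity count, so a burst of additions at startup no longer overflows the queue and drops the client.
```python
import asyncio

# Illustrative values only; the real constant names and limits differ.
MAX_EXPECTED_ENTITIES = 16384            # generous bound for large MQTT setups
MAX_PENDING_MSG = MAX_EXPECTED_ENTITIES  # queue peak matches that bound


class OutgoingQueue:
    """Minimal stand-in for a per-client outgoing message queue."""

    def __init__(self) -> None:
        self._to_write: asyncio.Queue = asyncio.Queue()
        self.closed = False

    def send_message(self, msg: str) -> None:
        """Queue a message; disconnect the client instead of growing unbounded."""
        if self._to_write.qsize() >= MAX_PENDING_MSG:
            self.closed = True  # overloaded client gets dropped
            return
        self._to_write.put_nowait(msg)
```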
* coalesce more
* delay more if the backlog gets large
* wait to send if the queue is building rapidly
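The three commits above describe coalescing and backpressure in the writer. A hedged sketch of what such a writer loop might look like; the thresholds, names, and batching format are assumptions, not the real implementation:
```python
import asyncio
import json

BACKLOG_SOFT_LIMIT = 512  # assumed threshold for "building rapidly"
COALESCE_DELAY = 0.05     # assumed pause to let more messages accumulate


async def writer_loop(queue: asyncio.Queue, send_str) -> None:
    """Drain queued messages and send them as batched frames."""
    while True:
        messages = [await queue.get()]
        if queue.qsize() > BACKLOG_SOFT_LIMIT:
            # The queue is building rapidly: wait briefly so more messages
            # can be coalesced into the same frame instead of many small ones.
            await asyncio.sleep(COALESCE_DELAY)
        # Coalesce everything that is queued right now into one frame.
        while not queue.empty():
            messages.append(queue.get_nowait())
        await send_str(json.dumps(messages))
```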
* tweak
* tweak for chrome since it works great in firefox but chrome cannot handle it
* Revert "tweak for chrome since it works great in firefox but chrome cannot handle it"
This reverts commit 439e2d76b1.
* adjust for chrome
* lower number
* remove code
* fixes
* fast path for bytes
* compact
* adjust test since we see the close right away now on overload
* simplify check
* reduce loop
* tweak
* handle ready right away
* Significantly reduce websocket API connection auth phase latency
Since the auth phase has exclusive control over the websocket
until ActiveConnection is created, we can bypass the queue and
send messages right away. This reduces the latency and reconnect
time since we do not have to wait for the background processing
of the queue to send the auth ok message.
* only start the writer queue after auth is successful
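Roughly, the idea in the two commits above (a sketch with aiohttp, using hypothetical helper names; not the actual auth flow): while the auth phase still owns the socket, responses are awaited directly on the WebSocketResponse, and the background writer task is only started once auth succeeds.
```python
from aiohttp import web


async def handle_auth_phase(wsock: web.WebSocketResponse) -> None:
    """Send auth messages directly on the socket, bypassing the writer queue."""
    await wsock.send_json({"type": "auth_required"})
    msg = await wsock.receive_json()
    if msg.get("type") == "auth" and _valid_token(msg.get("access_token")):
        await wsock.send_json({"type": "auth_ok"})
        # Only now would the queued writer task be started for normal traffic.
    else:
        await wsock.send_json({"type": "auth_invalid"})
        await wsock.close()


def _valid_token(token: object) -> bool:
    """Placeholder for real access-token validation."""
    return isinstance(token, str) and bool(token)
```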
* Allow passing binary to the WS connection
* Expand test coverage
* Test non-existing handler
* Allow signaling end of stream using empty payloads
* Store handlers in a list
* Handle binary handlers raising exceptions
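Taken together, the binary-channel commits above suggest a shape roughly like the following sketch (hypothetical names such as BinaryChannel and deliver; not the actual API): handlers live in a list, payloads are routed by index, an empty payload ends the stream, and a handler that raises is dropped instead of taking the socket down.
```python
from typing import Callable, List, Optional

BinaryHandler = Callable[[bytes], None]


class BinaryChannel:
    """Hypothetical registry for binary payload handlers."""

    def __init__(self) -> None:
        self._handlers: List[Optional[BinaryHandler]] = []

    def register(self, handler: BinaryHandler) -> int:
        """Store the handler in a list and return its index as the handle."""
        self._handlers.append(handler)
        return len(self._handlers) - 1

    def deliver(self, index: int, payload: bytes) -> None:
        """Route a binary payload to its handler, tolerating bad handlers."""
        handler = self._handlers[index] if index < len(self._handlers) else None
        if handler is None:
            return  # non-existing (or already closed) handler: ignore
        try:
            handler(payload)
        except Exception:  # a misbehaving handler must not kill the socket
            self._handlers[index] = None
            return
        if not payload:  # empty payload signals the end of the stream
            self._handlers[index] = None
```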
Since most of the JSON serialization work for the websocket was done
multiple times for the same message, we can avoid the overhead of
serializing the same message once per websocket client by caching the
serialized result.
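A sketch of the caching idea described above, assuming an illustrative CachedMessage wrapper (the real cache in Home Assistant is structured differently): the message is serialized once and the resulting string is reused for every subscribed client.
```python
import json
from typing import Any, Dict, Optional


class CachedMessage:
    """Serialize a websocket message once and reuse the result."""

    __slots__ = ("_msg", "_serialized")

    def __init__(self, msg: Dict[str, Any]) -> None:
        self._msg = msg
        self._serialized: Optional[str] = None

    def as_json(self) -> str:
        if self._serialized is None:  # the first subscriber pays the cost
            self._serialized = json.dumps(self._msg, separators=(",", ":"))
        return self._serialized       # every other subscriber reuses the string
```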
* Remove unnecessary exception re-wraps
* Preserve exception chains on re-raise
We slap "from cause" onto almost all possible cases here. In some cases
it could conceivably be better to use "from None" if we really want to
hide the cause. However, those should be in the minority, and "from
cause" should be an improvement over the corresponding raise without a
"from" in all cases anyway.
The only case where we raise from None here is in plex, where the
exception for an original invalid SSL cert is not the root cause for
failure to validate a newly fetched one.
Follow the local convention for exception variable names if there is a
consistent one, otherwise use `err` to match the majority of the codebase.
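For illustration, the two patterns involved look like this (generic exception types, not any specific integration):
```python
class CannotConnect(Exception):
    """Generic stand-in for an integration-specific error."""


def fetch_status(client):
    try:
        return client.get_status()
    except OSError as err:
        # Chain the original error so the traceback shows the real cause.
        raise CannotConnect("Device unreachable") from err


def validate_new_cert(fetch_cert):
    try:
        return fetch_cert()
    except ValueError:
        # The earlier failure is not the cause of this one; hide the chain.
        raise CannotConnect("New certificate failed validation") from None
```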
* Fix mistaken re-wrap in homematicip_cloud/hap.py
Missed the difference between HmipConnectionError and
HmipcConnectionError.
* Do not hide original error on plex new cert validation error
The original is not the cause of the new one, but showing the old one
in the traceback is useful nevertheless.
* Part 1 of 2 (no breaking changes in part 1).
When integrations configured via the UI block startup or fail to start,
the webserver can remain offline, which makes it impossible
to recover without manually changing files in
.storage since the UI is not available.
This change is the foundation that part 2 will build on;
it enables a listener to start the webserver when the frontend
has finished loading.
Frontend Changes (home-assistant/frontend#6068)
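A hedged sketch of the part-2 listener this change prepares for; the event and attribute names follow Home Assistant conventions but are written here from memory, and `server.start()` is a placeholder:
```python
EVENT_COMPONENT_LOADED = "component_loaded"
ATTR_COMPONENT = "component"


def async_setup_start_listener(hass, server) -> None:
    """Start the webserver as soon as the frontend component has loaded."""

    async def _component_loaded(event) -> None:
        if event.data.get(ATTR_COMPONENT) == "frontend":
            # Bring the UI up even if other integrations still block startup.
            await server.start()

    hass.bus.async_listen(EVENT_COMPONENT_LOADED, _component_loaded)
```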
* Address review comments
* bump timeout to 1800s, adjust comment
* bump timeout to 4h
* remove timeout failsafe
* and the test