* Improve recorder and worker thread matching in RecorderPool
Previously we would look at the names of the threads. This
was brittle because other integrations may name their
thread Recorder or DbWorker. Instead we now use explicit thread
ids, which ensures there will never be a conflict
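A rough sketch of the approach (attribute and method names here are illustrative, not the exact RecorderPool code): the pool records the idents of the threads it owns and checks threading.get_ident() instead of comparing thread names.
```python
import threading


class RecorderPool:
    """Illustrative sketch only; attribute and method names are assumptions."""

    def __init__(self) -> None:
        # Idents of the threads the recorder owns (recorder + db worker threads).
        self.recorder_and_worker_thread_ids: set[int] = set()

    def add_recorder_or_worker_thread(self) -> None:
        # Called from within the recorder/worker thread, so get_ident() is its id.
        self.recorder_and_worker_thread_ids.add(threading.get_ident())

    def _is_recorder_or_worker_thread(self) -> bool:
        # An explicit id can never collide with another integration that
        # happens to name its thread "Recorder" or "DbWorker".
        return threading.get_ident() in self.recorder_and_worker_thread_ids
```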
* fix
* fixes
* fixes
* Record state.last_reported
* Include last_reported in parts of the history API
* Use a bulk update
* fix refactoring error
---------
Co-authored-by: J. Nick Koston <nick@koston.org>
* Fix recorder ws_info blocking the event loop
Fixes
```
2024-02-15 06:37:55.423 WARNING (MainThread) [asyncio] Executing <Task pending name=websocket_api.async:ws_info coro=<_handle_async_response() running at /usr/src/homeassistant/homeassistant/components/websocket_api/decorators.py:26> wait_for=<_GatheringFuture pending cb=[Task.task_wakeup()] created at /usr/local/lib/python3.12/asyncio/tasks.py:712> cb=[set.remove()] created at /usr/src/homeassistant/homeassistant/core.py:653> took 0.332 seconds
```
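A hedged sketch of the general pattern behind this kind of fix (function names are illustrative, not the actual websocket_api/recorder code): anything that may block, such as gathering database status, runs in an executor thread instead of on the event loop.
```python
from homeassistant.core import HomeAssistant


def _blocking_recorder_info(hass: HomeAssistant) -> dict:
    """Stand-in for the part of ws_info that touches blocking resources."""
    return {"recording": True, "migration_in_progress": False}


async def ws_info(hass: HomeAssistant, connection, msg: dict) -> None:
    # Run the blocking work in the executor so the event loop never waits on it.
    info = await hass.async_add_executor_job(_blocking_recorder_info, hass)
    connection.send_result(msg["id"], info)
```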
* no instance did not actually work
Almost 99% of items that are put into the recorder queue are
Events. Avoid wrapping them in tasks since we have to unwrap
them right away, and it's much faster to check for both RecorderTask
and Event since events are the common case.
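Roughly, the dispatch looks like this (helper names are assumptions, not the real Recorder code):
```python
from homeassistant.core import Event


def process_queue_item(recorder, item) -> None:
    """Illustrative dispatch only; helper names are assumptions."""
    if isinstance(item, Event):
        # Fast path: events are by far the most common queue item, so handle
        # them directly instead of wrapping/unwrapping a task around them.
        recorder.process_event(item)  # hypothetical helper
    else:
        # Everything else is a RecorderTask (commit, purge, statistics, ...)
        # which knows how to run itself against the recorder instance.
        item.run(recorder)
```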
* Significantly speed up recorder event listener
This code is called every time an event happens since it
subscribes to all events. It's our most frequently called
listener out of the box.
It used to have a separate filter function, but it was
later combined after some earlier refactoring in core.
It was never optimized after that happened.
This change reduces the run time by ~70%
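As a loose illustration of the direction of the optimization (attribute names below are assumptions, not the actual listener): the filtering happens inline in the listener, so the hot path is only a couple of cheap checks before the event is queued.
```python
from homeassistant.core import Event


def event_listener(recorder, event: Event) -> None:
    """Illustrative only; attribute names are assumptions, not the real code."""
    entity_id = event.data.get("entity_id")
    if entity_id is not None and not recorder.entity_filter(entity_id):
        # Filtered entities never reach the queue.
        return
    recorder.queue.put_nowait(event)
```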
* decruft
* Ensure recorder shutdown runs if the run loop raises
If anything goes wrong with the recorder we should
still try to shut down cleanly
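A minimal sketch of the safety net (helper names are assumptions):
```python
import logging

_LOGGER = logging.getLogger(__name__)


def run(recorder) -> None:
    """Illustrative only; helper names are assumptions."""
    try:
        recorder.run_event_loop()
    except Exception:
        _LOGGER.exception("Unexpected error in the recorder run loop")
    finally:
        # Always close the session/engine and mark the run as ended,
        # even if the loop above raised.
        recorder.shutdown()
```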
* tweak
* tests
* tests
* handle migration failure
* tweak comment
* naming
* order
* order
* order
* reword
* adjust test
* fixes
* threading
* failure case
* fix test
* have to wait for stop because the task blocks on thread join
* Reduce overhead of legacy database columns on new installs
* Reduce overhead of legacy database columns on new installs
* Reduce overhead of legacy database columns on new installs
* Reduce overhead of legacy database columns on new installs
* not working as expected
* override the type compiler
* override the type compiler
* override the type compiler
* override the type compiler
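As a generic illustration of what an SQLAlchemy type-compiler override can look like (the actual types and rendered SQL in the recorder schema are not shown in this log; the later "pgsql char1" entry suggests PostgreSQL gets CHAR(1)):
```python
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.types import Text


class UnusedLegacyText(Text):
    """Marker type for legacy columns that are always empty on new installs."""


@compiles(UnusedLegacyText)
def _compile_unused_legacy_text(element, compiler, **kw):
    # Render the column as the smallest type most databases accept.
    return "CHAR(0)"


@compiles(UnusedLegacyText, "postgresql")
def _compile_unused_legacy_text_pg(element, compiler, **kw):
    # PostgreSQL does not allow CHAR(0); CHAR(1) is the smallest equivalent.
    return "CHAR(1)"
```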
* Apply suggestions from code review
* pgsql char1
* make entity filter test setup with old schema
* fix some more tests that were mutating state
* fix some more tests that were mutating state
* fix some more tests that were mutating state
* fix more dbstate mutations
* add shim for older tests
* split migration tests
* add coverage for purging legacy data
* tweak
* more fixes
* drop some legacy
* fix another test
* fix a few more
* add casts for postgresql in case someone deletes the schema changes table
* dry
* dry
* dry
* Resume the entity id post migration after a restart
If the entity migration finished and Home Assistant was
restarted during the post migration, it would never be resumed,
which means the old index and space would never be recovered
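Roughly, the resume check could look like this (task, flag, and helper names are assumptions, not the actual migration code):
```python
class EntityIDPostMigrationTask:
    """Stand-in for the task that cleans up the old entity_id index/data."""

    def run(self, recorder) -> None:
        recorder.cleanup_legacy_entity_ids()  # hypothetical helper


def schedule_entity_id_post_migration_if_needed(recorder) -> None:
    """Illustrative startup check; attribute names are assumptions."""
    if recorder.entity_id_migration_done and not recorder.entity_id_post_migration_done:
        # The migration itself finished before the restart, but the cleanup
        # step never ran, so queue it again to reclaim the old index and space.
        recorder.queue_task(EntityIDPostMigrationTask())
```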
* add migration resume test
* Allow passing an optional name to async_track_time_interval
This is the same idea as passing a name to asyncio.create_task, which
makes it easier to track down bugs
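A hedged usage example; the keyword shown below matches the intent described above, though the exact final signature is not spelled out in this log.
```python
from datetime import timedelta

from homeassistant.helpers.event import async_track_time_interval


async def _async_refresh(now) -> None:
    """Illustrative periodic callback."""


# `hass` is the running HomeAssistant instance in this context.
unsub = async_track_time_interval(
    hass,
    _async_refresh,
    timedelta(minutes=5),
    name="my_integration: periodic refresh",  # assumed keyword, eases debugging
)
```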
* more
* short
* still cannot find it
* add a few more
* test
* Fix cpu thrashing during purge after all legacy events were removed
We now remove the index of event ids on the states table when it is
all NULLs to save space. The purge path needs to avoid checking for legacy
rows to purge if the index has been removed, since that would result in a full
table scan each purge cycle that will always find no legacy rows to purge
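A simplified sketch of the guard (the flag name is an assumption, not the actual purge code):
```python
from homeassistant.components.recorder.db_schema import States


def find_legacy_rows_to_purge(recorder, session) -> list:
    """Illustrative guard only; the flag name is an assumption."""
    if not recorder.use_legacy_events_index:
        # The event_id index on states was dropped once every value was NULL,
        # so this query would be a full table scan that can never match.
        return []
    return (
        session.query(States.state_id)
        .filter(States.event_id.is_not(None))
        .limit(100)
        .all()
    )
```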
* one more place
* drop the key constraint as well
* fixes
* more sqlite
* Avoid database executor job to fetch statistic metadata on cache hit
Since we will almost always have a cache hit fetching
statistic metadata, we can avoid an executor job
* Avoid database executor job to fetch statistic metadata on cache hit
Since we will almost always have a cache hit fetching
statistic metadata, we can avoid an executor job
* Avoid database executor job to fetch statistic metadata on cache hit
Since we will almost always have a cache hit fetching
statistic metadata, we can avoid an executor job
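A simplified illustration of the cache-first lookup (attribute and method names are assumptions, not the actual statistics metadata manager):
```python
async def async_get_statistics_metadata(recorder, statistic_ids: set[str]) -> dict:
    """Illustrative cache-first lookup; names are assumptions."""
    cached = {
        stat_id: meta
        for stat_id in statistic_ids
        if (meta := recorder.statistics_meta_cache.get(stat_id)) is not None
    }
    if len(cached) == len(statistic_ids):
        # Full cache hit: no executor job and no database session needed.
        return cached
    # Only the misses go to the database, in the executor as before.
    missing = statistic_ids - cached.keys()
    fetched = await recorder.hass.async_add_executor_job(
        recorder.fetch_statistics_metadata, missing  # hypothetical helper
    )
    return {**cached, **fetched}
```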
* remove exception catch since the threading.excepthook will actually catch this in production
* fix a few missed ones
* threadsafe
* Update homeassistant/components/recorder/table_managers/statistics_meta.py
* coverage and optimistic caching