* Handle statistics columns being unmigrated from previous downgrades
If the user downgraded HA from 2023.3.x to an older version without
restoring the database, and then upgraded again with the same database,
they will have unmigrated statistics columns, since we only run the
migration once.
Because the check is expensive, we do not want to run it at every
startup, so we will only do it one more time; the risk that someone
will downgrade to an older version is very low at this point.
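A minimal sketch of the one-time fallback check, assuming a SQLAlchemy
session; the table and column names are illustrative, not the recorder's
actual schema:

```python
# Sketch only: cheap existence probe for leftover unmigrated rows.
from sqlalchemy import text
from sqlalchemy.orm import Session


def statistics_columns_need_migration(session: Session) -> bool:
    """Return True if any statistics row still lacks its timestamp value."""
    return (
        session.execute(
            text("SELECT 1 FROM statistics WHERE start_ts IS NULL LIMIT 1")
        ).scalar()
        is not None
    )
```

Once the probe comes back clean, a schema-changes flag can be persisted
so the check never runs again.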
* add guard to sqlite to prevent re-migrate
* test
* move test to insert with old schema
* use helper
* normalize timestamps
* remove
* add check
* add fallback migration
* commit
* remove useless logging
* do the other columns at the same time
* coverage
* dry
* comment
* Update tests/components/recorder/test_migration_from_schema_32.py
* Fallback to generating a new ULID on migration if context is missing or invalid
It was discovered that PostgreSQL will do a full scan when the index
has low cardinality because of missing context ids. We will now
generate a ULID from the timestamp of the row if the context data is
missing or invalid.
fixes #91514
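A hedged sketch of the fallback: build a valid ULID whose time component
comes from the row's own timestamp, so the index keeps high cardinality.
The helper names and the validity check are illustrative, not the actual
recorder implementation:

```python
# Sketch only: ULID = 48-bit millisecond timestamp + 80 random bits,
# encoded as 26 chars of Crockford base32.
import os

_CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"


def ulid_at_time(timestamp: float) -> str:
    """Build a ULID whose time part encodes the given epoch seconds."""
    value = (int(timestamp * 1000) << 80) | int.from_bytes(os.urandom(10), "big")
    return "".join(_CROCKFORD[(value >> bits) & 0x1F] for bits in range(125, -1, -5))


def migrated_context_id(row_time: float, context_id: str | None) -> str:
    # Illustrative validity check: a ULID is exactly 26 characters.
    if context_id is not None and len(context_id) == 26:
        return context_id
    return ulid_at_time(row_time)
```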
* tests
* tweak
* preen
* Deduplicate event_types in the events table
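The deduplication normalizes the repeated event_type strings into a small
lookup table keyed by an integer id. A hedged sketch of the shape, using
SQLAlchemy 2.0 declarative models with illustrative names rather than the
recorder's actual schema:

```python
from sqlalchemy import ForeignKey, Integer, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class EventTypes(Base):
    """One row per distinct event type string."""

    __tablename__ = "event_types"
    event_type_id: Mapped[int] = mapped_column(Integer, primary_key=True)
    event_type: Mapped[str] = mapped_column(String(64), unique=True)


class Events(Base):
    """Events now reference the lookup table instead of storing the string."""

    __tablename__ = "events"
    event_id: Mapped[int] = mapped_column(Integer, primary_key=True)
    event_type_id: Mapped[int] = mapped_column(ForeignKey("event_types.event_type_id"))
```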
* more fixes
* adjust
* fix product
* fix tests
* adjust
* migrate
* more test fixes
* fix
* migration test
* adjust
* speed up
* fix index
* fix more tests
* handle db failure
* preload
* tweak
* adjust
* fix stale docstrings, remove dead code
* refactor
* fix slow tests
* coverage
* self join to resolve query performance
* fix typo
* no need for quiet
* no need to drop index already dropped
* remove index that will never be used
* drop index sooner as we no longer use it
* Revert "remove index that will never be used"
This reverts commit 461aad2c52.
* typo
* Load pending state attributes and event data ids at startup
Since we queue all events to be processed after startup, we can get a
thundering herd of queries priming the LRUs of event data and state
attributes ids. Since we know we are about to process a chunk of
events, we can fetch all the ids in two queries.
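A hedged sketch of the idea: two bulk IN queries instead of one round
trip per event. Table, column, and cache names are illustrative:

```python
# Sketch only: prime both id caches for a pending chunk of events.
from sqlalchemy import bindparam, text

PRELOAD_EVENT_DATA = text(
    "SELECT hash, data_id FROM event_data WHERE hash IN :hashes"
).bindparams(bindparam("hashes", expanding=True))
PRELOAD_STATE_ATTRS = text(
    "SELECT hash, attributes_id FROM state_attributes WHERE hash IN :hashes"
).bindparams(bindparam("hashes", expanding=True))


def preload_id_caches(session, data_hashes, attr_hashes, data_lru, attrs_lru):
    """Fill the id LRUs with two queries before processing queued events."""
    if data_hashes:
        for hash_, data_id in session.execute(
            PRELOAD_EVENT_DATA, {"hashes": list(data_hashes)}
        ):
            data_lru[hash_] = data_id
    if attr_hashes:
        for hash_, attributes_id in session.execute(
            PRELOAD_STATE_ATTRS, {"hashes": list(attr_hashes)}
        ):
            attrs_lru[hash_] = attributes_id
```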
* lru
* fix hang
* Fix recorder LRU being destroyed if event session is reopened
We would clear the LRU in _close_event_session, but it would never be
replaced with an LRU again, so it would leak memory if the event
session was reopened.
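One plausible reading of the fix, assuming the dict-like LRU class from
the lru-dict package; the attribute and size names are illustrative:

```python
# Sketch only: resetting the cache must leave a *bounded* mapping behind.
from lru import LRU

CACHE_SIZE = 2048


class EventSessionCaches:
    def __init__(self) -> None:
        self._event_data_ids = LRU(CACHE_SIZE)

    def _close_event_session(self) -> None:
        # Buggy version left an unbounded plain dict behind:
        #   self._event_data_ids = {}
        # Fix: swap in a fresh, still-bounded LRU instead.
        self._event_data_ids = LRU(CACHE_SIZE)
```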
* cleanup
* Separate recorder database schema from other classes
* fix logbook imports
* migrate new tests
* few more
* last one
* fix merge
Co-authored-by: J. Nick Koston <nick@koston.org>