* Handle statistics columns being unmigrated from previous downgrades
If the user downgrades Home Assistant from 2023.3.x to an older version without
restoring the database, and then upgrades again with the same database,
they will have unmigrated statistics columns, since we only migrate them
once.
As it's expensive to check, we do not want to check every time
at startup, so we will only do this one more time, since the
risk that someone will downgrade to an older version is very
low at this point (a sketch of the check follows).
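A minimal sketch of the one-shot guard, assuming the legacy `start` column
versus the new `start_ts` column; the helper name is hypothetical and the
real recorder code differs:
```python
# Sketch only: illustrative, not the actual recorder implementation.
from sqlalchemy import text
from sqlalchemy.orm import Session


def statistics_columns_migrated(session: Session) -> bool:
    """Return True if no statistics rows still need the timestamp migration.

    This is expensive on large databases, so the caller should persist
    the result and only re-run the check one more time.
    """
    # A row with a populated legacy datetime column but an empty
    # timestamp column means the one-time migration never ran.
    unmigrated = session.execute(
        text(
            "SELECT 1 FROM statistics "
            "WHERE start_ts IS NULL AND start IS NOT NULL LIMIT 1"
        )
    ).first()
    return unmigrated is None
```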
* add guard to sqlite to prevent re-migrate
* test
* move test to insert with old schema
* use helper
* normalize timestamps
* remove
* add check
* add fallback migration
* commit
* remove useless logging
* do the other columns at the same time
* coverage
* dry
* comment
* Update tests/components/recorder/test_migration_from_schema_32.py
* Reduce overhead of legacy database columns on new installs
* not working as expected
* override the type compiler
* Apply suggestions from code review
* pgsql char1
* make entity filter test setup with old schema
* fix some more tests that were mutating state
* fix more dbstate mutations
* add shim for older tests
* split migration tests
* add coverage for purging legacy data
* tweak
* more fixes
* drop some legacy
* fix another test
* fix a few more
* add casts for postgresql in case someone deletes the schema changes table
* dry
* Resume entity_id post-migration after a restart
If the entity_id migration finished and Home Assistant was
restarted during the post-migration, the post-migration would never
be resumed, which means the old index and its space would never be
recovered (sketched below)
* add migration resume test
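A minimal sketch of the resume check, with hypothetical helper names; the
actual recorder task wiring differs:
```python
# Sketch only: hypothetical names, not the actual recorder code.
from sqlalchemy import text
from sqlalchemy.orm import Session


def needs_entity_id_post_migration(session: Session) -> bool:
    """Check at startup whether the post-migration still has work to do.

    If Home Assistant restarted mid post-migration, legacy rows keep a
    populated entity_id column; the post-migration must resume so the
    old index and its disk space can be recovered.
    """
    return (
        session.execute(
            text("SELECT 1 FROM states WHERE entity_id IS NOT NULL LIMIT 1")
        ).first()
        is not None
    )
```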
* Set unique on StatesMeta and EventTypes
These should have been marked unique originally to prevent
collision bugs from going unnoticed. These changes have not shipped
in a beta yet, so this is not a breaking change
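A minimal sketch of the constraints, assuming trimmed declarative models
similar in shape to the recorder schema:
```python
# Sketch only: trimmed models showing the unique constraints.
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class EventTypes(Base):
    """Lookup table mapping an event_type string to a small integer id."""

    __tablename__ = "event_types"
    event_type_id = Column(Integer, primary_key=True)
    # unique=True makes the database reject duplicate rows instead of
    # letting collision bugs go unnoticed.
    event_type = Column(String(64), unique=True)


class StatesMeta(Base):
    """Lookup table mapping an entity_id string to a small integer id."""

    __tablename__ = "states_meta"
    metadata_id = Column(Integer, primary_key=True)
    entity_id = Column(String(255), unique=True)
```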
* Drop duplicated indices from schema
https://docs.percona.com/percona-toolkit/pt-duplicate-key-checker.html
```
% pt-duplicate-key-checker --databases fresh
ALTER TABLE `fresh`.`events` DROP INDEX `ix_events_event_type_id`;
ALTER TABLE `fresh`.`states` DROP INDEX `ix_states_metadata_id`;
ALTER TABLE `fresh`.`statistics` DROP INDEX `ix_statistics_metadata_id`;
ALTER TABLE `fresh`.`statistics_short_term` DROP INDEX `ix_statistics_short_term_metadata_id`;
```
* Deduplicate event_types in the events table
* more fixes
* adjust
* fix product
* fix tests
* adjust
* migrate
* more test fixes
* fix
* migration test
* adjust
* speed up
* fix index
* fix more tests
* handle db failure
* preload
* tweak
* adjust
* fix stale docstrings, remove dead code
* refactor
* fix slow tests
* coverage
* use a self-join to resolve query performance
* fix typo
* no need for quiet
* no need to drop index already dropped
* remove index that will never be used
* drop index sooner as we no longer use it
* Revert "remove index that will never be used"
This reverts commit 461aad2c52.
* typo
* Ensure new tables are created using InnoDB
InnoDB is the only supported engine to use with MariaDB
or MySQL, as we currently have large keys in the states
table that will not work with MyISAM. Other storage
engines, including Aria, will likely work fine, but they
are not officially supported.
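A minimal sketch of pinning the engine per table with SQLAlchemy; the real
schema applies this to every recorder table:
```python
# Sketch only: example table forcing InnoDB on MySQL/MariaDB.
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class States(Base):
    """The engine kwargs are ignored by other dialects, so the same
    model still works on SQLite and PostgreSQL."""

    __tablename__ = "states"
    __table_args__ = {
        "mysql_engine": "InnoDB",
        "mariadb_engine": "InnoDB",
    }
    state_id = Column(Integer, primary_key=True)
    state = Column(String(255))
```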
* Load pending state attributes and event data ids at startup
Since we queue all events to be processed after startup,
we can have a thundering herd of queries to prime the
LRUs of event data and state attribute ids. Since we
know we are about to process a chunk of events, we can
fetch all the ids in two queries, as sketched below
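A minimal sketch of the bulk fetch, with hypothetical helper and column
names; the point is one query per table instead of one per queued event:
```python
# Sketch only: hypothetical helper, not the actual recorder code.
from sqlalchemy import bindparam, text
from sqlalchemy.orm import Session


def prime_attributes_cache(
    session: Session, hashes: list[int], cache: dict[int, int]
) -> None:
    """Resolve all pending attribute hashes with a single query."""
    if not hashes:
        return
    stmt = text(
        "SELECT hash, attributes_id FROM state_attributes WHERE hash IN :hashes"
    ).bindparams(bindparam("hashes", expanding=True))
    for hash_, attributes_id in session.execute(stmt, {"hashes": hashes}):
        cache[hash_] = attributes_id
```
The same pattern repeated for event_data gives the two queries mentioned above.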
* lru
* fix hang
* Fix recorder LRU being destroyed if event session is reopened
We would clear the LRU in _close_event_session, but it would
never get replaced with an LRU again, so it would leak memory
if the event session was reopened
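A minimal sketch of the bug shape, assuming a bounded LRU mapping such as
the one from the lru-dict package; field names are illustrative:
```python
# Sketch only: generic illustration, not the recorder's actual fields.
from lru import LRU  # assumption: a bounded mapping with a clear() method

CACHE_SIZE = 2048


class EventSessionCaches:
    def __init__(self) -> None:
        self._event_data_ids = LRU(CACHE_SIZE)

    def _close_event_session(self) -> None:
        # Bug: assigning a plain dict here silently dropped the size
        # bound, so the cache grew without limit once the event session
        # was reopened.
        # self._event_data_ids = {}
        # Fix: keep the bounded LRU and just empty it.
        self._event_data_ids.clear()
```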
* cleanup
* Remove default from created statistics schema
We were still inserting created times because, even though
None was passed explicitly when creating the object, the
column default would still be used
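A minimal sketch of why the default still applied, using a standalone
model with assumed names:
```python
# Sketch only: demonstrates SQLAlchemy's Python-side default behavior.
from sqlalchemy import Column, DateTime, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class StatisticsRow(Base):
    __tablename__ = "statistics_row"
    id = Column(Integer, primary_key=True)
    # Before: created = Column(DateTime(timezone=True), default=dt_util.utcnow)
    # Passing created=None at construction still triggered the default,
    # because the ORM treats a None attribute as "no value present" and
    # fires the column default at flush time.
    created = Column(DateTime(timezone=True))
```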
* adjust column
* preserve original pre-SQLAlchemy 2.0 behavior
* Add JSON type definitions
* Sample use
* Keep mutable for a follow-up PR (avoid dead code)
* Use list/dict
* Remove JsonObjectType
* Remove reference to Union
* Cleanup
* Improve rest
* Rename json_dict => json_data
* Add docstring
* Add type hint to json_loads
* Add cast
* Move type alias to json helpers
* Cleanup
* Create and use json_loads_object
* Make error more explicit and add tests
* Use JsonObjectType in conversation
* Remove quotes
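A minimal sketch of the resulting helpers, with shapes assumed from these
commits; see homeassistant/util/json.py for the real definitions:
```python
# Sketch only: assumed shapes of the type aliases and helper.
import json
from typing import cast

JsonValueType = (
    dict[str, "JsonValueType"]
    | list["JsonValueType"]
    | str
    | int
    | float
    | bool
    | None
)
JsonObjectType = dict[str, JsonValueType]


def json_loads_object(data: bytes | str) -> JsonObjectType:
    """Parse JSON data and ensure the result is a JSON object (dict)."""
    value: JsonValueType = json.loads(data)
    if isinstance(value, dict):
        return cast(JsonObjectType, value)
    raise ValueError(f"Expected JSON to be parsed as a dict got {type(value)}")
```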