* Speed up logbook and history queries where ORM rows are not needed
This avoids having SQLAlchemy wrap the Result in a ChunkedIteratorResult,
which has additional overhead we do not need for these cases (see the sketch below)
* apply the same optimization in more places
* anything that uses _sorted_statistics_to_dict does not need ORM rows either
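A minimal sketch of the idea behind these commits, with an illustrative stand-in for the recorder's states table: selecting plain columns through SQLAlchemy Core and executing on the session's underlying connection returns lightweight Row tuples instead of ORM objects, so the ORM never wraps the result in a ChunkedIteratorResult.

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, select
from sqlalchemy.orm import Session

metadata = MetaData()
# Illustrative stand-in for the recorder's states table.
states = Table(
    "states",
    metadata,
    Column("state_id", Integer, primary_key=True),
    Column("state", String(255)),
)

def fetch_state_rows(session: Session) -> list:
    """Fetch plain Row tuples without constructing ORM objects."""
    stmt = select(states.c.state_id, states.c.state)
    # Executing on the Connection yields a CursorResult rather than
    # the ORM's ChunkedIteratorResult wrapper.
    return session.connection().execute(stmt).all()
```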
https://www.sqlite.org/pragma.html#pragma_optimize
> To achieve the best long-term query performance without the need to do a detailed engineering analysis of the application schema and SQL, it is recommended that applications run "PRAGMA optimize" (with no arguments) just before closing each database connection. Long-running applications might also benefit from setting a timer to run "PRAGMA optimize" every few hours.
> This pragma is usually a no-op or nearly so and is very fast.
Since we keep the recorder connection open for the entire time HA
is running, we fall into the long-running application bucket
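A minimal sketch of the timer approach with Python's built-in sqlite3 module (the recorder itself connects through SQLAlchemy, and the interval shown is illustrative):

```python
import sqlite3
import threading

OPTIMIZE_INTERVAL = 6 * 60 * 60  # "every few hours", per the SQLite docs

def schedule_pragma_optimize(conn: sqlite3.Connection) -> None:
    """Run PRAGMA optimize periodically on a long-lived connection."""

    def _optimize() -> None:
        conn.execute("PRAGMA optimize")
        schedule_pragma_optimize(conn)  # re-arm the timer

    timer = threading.Timer(OPTIMIZE_INTERVAL, _optimize)
    timer.daemon = True
    timer.start()

# check_same_thread=False because the timer fires on another thread.
conn = sqlite3.connect("recorder.db", check_same_thread=False)
schedule_pragma_optimize(conn)
# ...and once more just before shutdown:
# conn.execute("PRAGMA optimize"); conn.close()
```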
* Reduce overhead of legacy database columns on new installs
* not working as expected
* override the type compiler
* Apply suggestions from code review
* pgsql: use CHAR(1)
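A sketch of the type-compiler override using SQLAlchemy's compiler extension; UnusedLegacyDateTime and the placeholder types chosen are assumptions for illustration, not the exact recorder code:

```python
from sqlalchemy import DateTime
from sqlalchemy.ext.compiler import compiles

class UnusedLegacyDateTime(DateTime):
    """Legacy column type that new installs never populate."""

@compiles(UnusedLegacyDateTime, "mysql")
def _unused_mysql(type_, compiler, **kw):
    # CHAR(0) stores essentially nothing per row on MySQL/MariaDB.
    return "CHAR(0)"

@compiles(UnusedLegacyDateTime, "postgresql")
def _unused_pgsql(type_, compiler, **kw):
    # PostgreSQL has no zero-length CHAR, so CHAR(1) is the
    # cheapest stand-in.
    return "CHAR(1)"
```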
* set up entity filter tests with the old schema
* fix some more tests that were mutating state
* fix more dbstate mutations
* add shim for older tests
* split migration tests
* add coverage for purging legacy data
* tweak
* more fixes
* drop some legacy code
* fix another test
* fix a few more
* add casts for postgresql in case someone deletes the schema changes table
* DRY up repeated code
* Load pending state attributes and event data ids at startup
Since we queue all events to be processed after startup,
we can have a thundering herd of queries to prime the
LRUs of event data and state attribute ids. Because we
know we are about to process a chunk of events, we can
fetch all the ids in two queries (sketch below)
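A sketch of the two-query priming, with pared-down, illustrative stand-ins for the real tables and caches:

```python
from sqlalchemy import BigInteger, Column, MetaData, Table, Text, select
from sqlalchemy.orm import Session

metadata = MetaData()
state_attributes = Table(
    "state_attributes", metadata,
    Column("attributes_id", BigInteger, primary_key=True),
    Column("hash", BigInteger),
    Column("shared_attrs", Text),
)
event_data = Table(
    "event_data", metadata,
    Column("data_id", BigInteger, primary_key=True),
    Column("hash", BigInteger),
    Column("shared_data", Text),
)

def prime_id_caches(
    session: Session,
    attr_hashes: set[int],
    data_hashes: set[int],
    attr_id_cache: dict[str, int],
    data_id_cache: dict[str, int],
) -> None:
    """Fill both id caches with one query each instead of one per event."""
    if attr_hashes:
        for attrs_id, shared in session.execute(
            select(state_attributes.c.attributes_id,
                   state_attributes.c.shared_attrs)
            .where(state_attributes.c.hash.in_(attr_hashes))
        ):
            attr_id_cache[shared] = attrs_id
    if data_hashes:
        for data_id, shared in session.execute(
            select(event_data.c.data_id, event_data.c.shared_data)
            .where(event_data.c.hash.in_(data_hashes))
        ):
            data_id_cache[shared] = data_id
```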
* lru
* fix hang
* Fix recorder LRU being destroyed if event session is reopened
We would clear the LRU in _close_event_session, but
it would never get replaced with an LRU again, so
it would leak memory if the event session is reopened
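The bug pattern in miniature, using the lru-dict package (class and attribute names are illustrative):

```python
from lru import LRU  # lru-dict package

CACHE_SIZE = 2048

class EventSessionCaches:
    def __init__(self) -> None:
        self._id_cache = LRU(CACHE_SIZE)

    def close(self) -> None:
        # Buggy: self._id_cache = {}  -> the size bound is lost, so the
        # cache grows without limit once the event session is reopened.
        # Fixed: replace it with a fresh bounded LRU instead.
        self._id_cache = LRU(CACHE_SIZE)
```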
* cleanup
* Mark PostgreSQL range select as fast
We were using the slow range select workaround for
PostgreSQL that was originally developed for MariaDB,
but it is actually slower on PostgreSQL
fixes #83253
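A sketch of gating the workaround by dialect; the flag and names are hypothetical:

```python
# Only MySQL/MariaDB benefit from the slow range-select workaround;
# PostgreSQL takes the fast path.
SLOW_RANGE_IN_SELECT_DIALECTS = {"mysql", "mariadb"}

def slow_range_in_select(dialect_name: str) -> bool:
    return dialect_name in SLOW_RANGE_IN_SELECT_DIALECTS
```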
* Add helper to calculate statistic period start and end
* Don't parse values in resolve_period
* Add specific test for resolve_period
* Improve typing
* Move to recorder/util.py
* Extract period schema
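A simplified sketch of what such a helper does; the period schema shown (fixed_period / rolling_window) is illustrative rather than the exact schema extracted here:

```python
from datetime import datetime, timedelta, timezone

def resolve_period(
    period: dict,
) -> tuple[datetime | None, datetime | None]:
    """Turn a period description into concrete (start, end) datetimes."""
    if fixed := period.get("fixed_period"):
        return fixed.get("start_time"), fixed.get("end_time")
    if rolling := period.get("rolling_window"):
        end = datetime.now(timezone.utc) - rolling.get("offset", timedelta())
        return end - rolling["duration"], end
    return None, None
```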
* Separate recorder database schema from other classes
* fix logbook imports
* migrate new tests
* few more
* last one
* fix merge
Co-authored-by: J. Nick Koston <nick@koston.org>
* Use ciso8601 for parsing MySQLdb datetimes
The default parser is this:
5340191feb/MySQLdb/times.py (L66)
* tweak
* add coverage for building the MySQLdb connect conv param
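A sketch of building the conv mapping; FIELD_TYPE and conversions are real MySQLdb concepts, while the wrapper and the SQLAlchemy wiring in the comment are illustrative:

```python
import ciso8601
from MySQLdb.constants import FIELD_TYPE
from MySQLdb.converters import conversions

def _datetime_or_none(value: str):
    """Parse a DATETIME string with ciso8601, tolerating bad values."""
    try:
        return ciso8601.parse_datetime(value)
    except ValueError:
        return None

def build_mysqldb_conv() -> dict:
    # Copy the defaults and swap in the fast DATETIME parser.
    return {**conversions, FIELD_TYPE.DATETIME: _datetime_or_none}

# e.g. create_engine(url, connect_args={"conv": build_mysqldb_conv()})
```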