Set unique on StatesMeta and EventTypes
These should have been marked unique originally to prevent
collision bugs from going unnoticed. These tables have not
shipped in a beta yet, so this is not a breaking change.
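For reference, a minimal sketch of what marking these unique looks like, assuming SQLAlchemy declarative models shaped roughly like the recorder's StatesMeta and EventTypes tables (column names and lengths here are illustrative, not the exact schema):
```
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class StatesMeta(Base):
    """Lookup table mapping an entity_id to a small integer key."""

    __tablename__ = "states_meta"
    metadata_id = Column(Integer, primary_key=True)
    # unique=True lets the database reject duplicate rows instead of
    # letting a collision bug go unnoticed.
    entity_id = Column(String(255), unique=True)


class EventTypes(Base):
    """Lookup table mapping an event_type to a small integer key."""

    __tablename__ = "event_types"
    event_type_id = Column(Integer, primary_key=True)
    event_type = Column(String(64), unique=True)
```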
Drop duplicated indices from schema
https://docs.percona.com/percona-toolkit/pt-duplicate-key-checker.html
```
% pt-duplicate-key-checker --databases fresh
ALTER TABLE `fresh`.`events` DROP INDEX `ix_events_event_type_id`;
ALTER TABLE `fresh`.`states` DROP INDEX `ix_states_metadata_id`;
ALTER TABLE `fresh`.`statistics` DROP INDEX `ix_statistics_metadata_id`;
ALTER TABLE `fresh`.`statistics_short_term` DROP INDEX `ix_statistics_short_term_metadata_id`;
```
* Deduplicate event_types in the events table
* Deduplicate event_types in the events table
* more fixes
* adjust
* adjust
* fix product
* fix tests
* adjust
* migrate
* migrate
* migrate
* more test fixes
* more test fixes
* fix
* migration test
* adjust
* speed up
* fix index
* fix more tests
* handle db failure
* preload
* tweak
* adjust
* fix stale docstrings, remove dead code
* refactor
* fix slow tests
* coverage
* self join to resolve query performance
* fix typo
* no need for quiet
* no need to drop index already dropped
* remove index that will never be used
* drop index sooner as we no longer use it
* Revert "remove index that will never be used"
This reverts commit 461aad2c52.
* typo
Ensure new tables are created using InnoDB
InnoDB is the only supported engine to use with MariaDB
or MySQL, as we currently have large keys in the states
table that will not work with MyISAM. Other storage
engines, including Aria, will likely work fine, but they
are not officially supported.
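A sketch of how a table can be pinned to InnoDB through SQLAlchemy's dialect-specific table options (the model below is a placeholder, not the real recorder schema):
```
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class ExampleTable(Base):
    """Placeholder model showing the storage engine pinned to InnoDB."""

    __tablename__ = "example_table"
    # Only affects CREATE TABLE on the mysql/mariadb dialects; SQLite and
    # PostgreSQL ignore these options. The mariadb_* key needs SQLAlchemy 1.4+.
    __table_args__ = {"mysql_engine": "InnoDB", "mariadb_engine": "InnoDB"}

    id = Column(Integer, primary_key=True)
    value = Column(String(255))
```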
* Load pending state attributes and event data ids at startup
Since we queue all events to be processed after startup,
we can get a thundering herd of queries to prime the
LRUs of event data and state attributes ids. Because we
know we are about to process a chunk of events, we can
fetch all the ids in two queries.
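Roughly, the idea is two bulk lookups keyed by hash; a sketch assuming the EventData and StateAttributes recorder tables (with data_id/attributes_id and hash columns) and LRU-style caches keyed by hash:
```
from sqlalchemy import select


def _prime_id_caches(session, data_hashes, attr_hashes, data_id_cache, attr_id_cache):
    """Sketch: resolve all pending ids in two bulk queries instead of one per event."""
    # EventData / StateAttributes are assumed mapped recorder tables.
    # One query for every pending event_data row...
    for data_id, shared_hash in session.execute(
        select(EventData.data_id, EventData.hash).where(EventData.hash.in_(data_hashes))
    ):
        data_id_cache[shared_hash] = data_id

    # ...and one query for every pending state_attributes row.
    for attributes_id, shared_hash in session.execute(
        select(StateAttributes.attributes_id, StateAttributes.hash).where(
            StateAttributes.hash.in_(attr_hashes)
        )
    ):
        attr_id_cache[shared_hash] = attributes_id
```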
* lru
* fix hang
* Fix recorder LRU being destroyed if event session is reopened
We would clear the LRU in _close_event_session, but
it would never get replaced with an LRU again, so
it would leak memory if the event session was reopened.
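A hypothetical sketch of the pattern described above (not the actual recorder code, and assuming the lru-dict package): swapping the LRU for an unbounded dict leaks, while putting a fresh LRU back keeps memory bounded.
```
from lru import LRU  # lru-dict


class EventSessionHolder:
    """Hypothetical holder illustrating the leak described above."""

    def __init__(self) -> None:
        self._event_data_ids = LRU(2048)

    def _close_event_session_buggy(self) -> None:
        # The LRU is "cleared" by swapping in a plain dict, which has no
        # size bound, so it grows forever once the session is reopened.
        self._event_data_ids = {}

    def _close_event_session_fixed(self) -> None:
        # Replace it with a fresh LRU so the size bound is preserved.
        self._event_data_ids = LRU(2048)
```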
* cleanup
* Remove default from created statistics schema
We were still inserting created times because, even though
None was passed explicitly when creating the object, the
default would still be used.
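For illustration only (not the real statistics model): per the note above, a Python-side column default was still applied even when created=None was passed explicitly, so removing the default is what actually stops the timestamps from being written.
```
from sqlalchemy import Column, DateTime, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class StatisticsExample(Base):
    """Illustrative model only."""

    __tablename__ = "statistics_example"
    id = Column(Integer, primary_key=True)
    # Before: created = Column(DateTime(timezone=True), default=...) meant a
    # created time was still inserted even when None was passed explicitly.
    # After: no default, so NULL can actually be stored.
    created = Column(DateTime(timezone=True))
```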
* adjust column
* preserve original pre-SQLAlchemy 2.0 behavior
* Add JSON type definitions
* Sample use
* Keep mutable for a follow-up PR (avoid dead code)
* Use list/dict
* Remove JsonObjectType
* Remove reference to Union
* Cleanup
* Improve rest
* Rename json_dict => json_data
* Add docstring
* Add type hint to json_loads
* Add cast
* Move type alias to json helpers
* Cleanup
* Create and use json_loads_object
* Make error more explicit and add tests
* Use JsonObjectType in conversation
* Remove quotes
- These were using orjson directly; it's a bit cleaner
to use the helper so everything is easier to adjust
in the future if we need to change anything about
the loading.
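A sketch of the shape of the helper and the type alias (the exact signatures and messages in the json helpers may differ):
```
from typing import Any, cast

import orjson

# Recursive JSON aliases along the lines of the added type definitions.
JsonValueType = (
    dict[str, "JsonValueType"] | list["JsonValueType"] | str | int | float | bool | None
)
JsonObjectType = dict[str, JsonValueType]

json_loads = orjson.loads  # one place to swap the JSON backend later


def json_loads_object(data: bytes | bytearray | memoryview | str) -> JsonObjectType:
    """Parse JSON and make the expectation of a JSON object explicit."""
    value: Any = json_loads(data)
    if isinstance(value, dict):
        return cast(JsonObjectType, value)
    raise ValueError(f"Expected JSON object, got {type(value).__name__}")
```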
* Initial orjson support take 2
Still need to work out a problem building wheels.
--
Redux of #72754 / #32153. Now possible since the following is solved:
ijl/orjson#220 (comment)
This implements orjson where we use our default encoder. This does not implement orjson where `ExtendedJSONEncoder` is used, as these areas tend to be called far less frequently. If it's desired, this could be done in a follow-up, but it seemed like a case of diminishing returns (except maybe for large diagnostics files, or traces, but those are not expected to be downloaded frequently).
Areas where this makes a perceptible difference:
- Anything that subscribes to entities (Initial subscribe_entities payload)
- Initial download of registries on first connection / restore
- History queries
- Saving states to the database
- Large logbook queries
- Anything that subscribes to events (appdaemon)
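Roughly, the fast path looks like a thin wrapper around orjson.dumps that hands anything orjson cannot serialize natively to the existing default encoder; a sketch with illustrative names:
```
from typing import Any

import orjson


def _default_encoder(obj: Any) -> Any:
    """Illustrative fallback for types orjson does not handle natively."""
    if isinstance(obj, set):
        return list(obj)
    if hasattr(obj, "as_dict"):
        return obj.as_dict()
    raise TypeError


def json_bytes(data: Any) -> bytes:
    """Serialize to JSON bytes with orjson, falling back to the default encoder."""
    return orjson.dumps(data, option=orjson.OPT_NON_STR_KEYS, default=_default_encoder)
```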
Caveats:
orjson supports serializing dataclasses natively (and much faster) which
eliminates the need to implement `as_dict` in many places
when the data is already in a dataclass. This works
well as long as all the data in the dataclass can also
be serialized. I audited all places where we have an `as_dict`
for a dataclass and found that only backups needed to be adjusted (support for `Path` needed to be added for backups). I was a little bit worried about `SensorExtraStoredData` with `Decimal`, but it all seems to work out since it converts the value before it gets to the json encoding. cc @dgomes
If it turns out to be a problem, we can disable this
with option |= [orjson.OPT_PASSTHROUGH_DATACLASS](https://github.com/ijl/orjson#opt_passthrough_dataclass) and it
will fall back to `as_dict`.
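A small example of that fallback: orjson serializes dataclasses natively by default, and OPT_PASSTHROUGH_DATACLASS routes them through the default callable instead (the dataclass here is made up):
```
from dataclasses import dataclass
from typing import Any

import orjson


@dataclass
class ExampleState:
    entity_id: str
    value: float

    def as_dict(self) -> dict[str, Any]:
        return {"entity_id": self.entity_id, "value": self.value}


state = ExampleState("sensor.example", 1.5)

# Native dataclass serialization (the fast path relied on here).
fast = orjson.dumps(state)

# Opt-out: route dataclasses through the default callable so as_dict()
# is used instead, if native serialization ever becomes a problem.
fallback = orjson.dumps(
    state,
    option=orjson.OPT_PASSTHROUGH_DATACLASS,
    default=lambda obj: obj.as_dict(),
)
```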
It's quite impressive for history queries:
<img width="1271" alt="Screen_Shot_2022-05-30_at_23_46_30" src="https://user-images.githubusercontent.com/663432/171145699-661ad9db-d91d-4b2d-9c1a-9d7866c03a73.png">
* use for views as well
* handle UnicodeEncodeError
* tweak
* DRY
* DRY
* not needed
* fix tests
* Update tests/components/http/test_view.py
* Update tests/components/http/test_view.py
* black
* templates
* Separate recorder database schema from other classes
* fix logbook imports
* migrate new tests
* few more
* last one
* fix merge
Co-authored-by: J. Nick Koston <nick@koston.org>