* Set lock status correctly for Schlage BE469 Z-Wave locks
PR #17386 attempted to improve the state of Z-Wave lock tracking for
some problematic models. However, it operated under flawed
assumptions: namely, that we can always trust `self.values` to have
fresh data, and that the Schlage BE469 sends alarm reports after every
lock event. We can't trust `self.values`, and the Schlage is very
broken. :)
When we receive a notification from the driver about a state change,
we call `update_properties` - but we can (and do!) have _stale_
properties left over from previous updates. #17386 really works best
if you start from a clean slate each time. However, `update_properties`
is called on every value update, and we don't get a reason why.
Moreover, values that weren't just refreshed are not removed. So blindly
looking at something like `self.values.access_control` when deciding to
apply a workaround will not always be correct - whether it is depends
on what happened in the past.
For the sad case of the BE469, here are the Z-Wave events that happen
under various circumstances:
RF Lock / Unlock:
- Send: door lock command set
- Receive: door lock report
- Send: door lock command get
- Receive: door lock report
Manual Lock / Unlock:
- Receive: alarm
- Send: door lock command get
- Receive: door lock report
Keypad Lock / Unlock:
- Receive: alarm
- Send: door lock command get
- Receive: door lock report
Thus, this PR introduces yet another workaround - we track the current
and last Z-Wave command that the driver saw, and infer what happened
from the sequence of events (see the sketch below). This seems to be
the most reliable way to go
- simply asking the driver to refresh various states doesn't clear out
alarms the way you would expect; this model doesn't support the access
control logging commands; and trying to manually clear out alarm state
when calling RF lock/unlock was tricky.
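As a rough illustration of the idea - this is a sketch, not the actual
lock platform code; `LockStateTracker`, `on_command` and the constants
are made-up names:

```python
# Sketch: decide how a lock/unlock happened from the order of messages,
# instead of trusting possibly-stale values. All names are illustrative.

ALARM = "alarm"
DOOR_LOCK_REPORT = "door lock report"


class LockStateTracker:
    """Remember the last two commands the driver reported."""

    def __init__(self):
        self._last_command = None
        self._current_command = None

    def on_command(self, command):
        """Record each incoming command, keeping the previous one around."""
        self._last_command = self._current_command
        self._current_command = command

    def source_of_last_change(self):
        """Guess how the lock changed, per the sequences above:

        - RF lock/unlock: a door lock report with no preceding alarm.
        - Manual/keypad lock/unlock: an alarm, then a door lock report.
        """
        if self._current_command != DOOR_LOCK_REPORT:
            return None
        if self._last_command == ALARM:
            return "manual or keypad"
        return "rf"
```

In the real component the equivalent bookkeeping would live wherever
value updates are handled (e.g. `update_properties`); the point is only
that the order of alarm and door lock report messages tells us how the
lock was operated.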
The lock state, when the Z-Wave network restarts, may look out of sync
for a few minutes. However, after the full network restart is complete,
everything looks good in my testing.
* Fix linter
* Automatically detect if ipv4/ipv6 is used for cert_expiry
Fixes #18818
Python sockets use IPv4 by default. If the domain being checked only
has an IPv6 record, socket creation errors out with
`[Errno -2] Name or service not known`.
This fix detects the correct address family for the domain and creates
the socket with that family.
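Roughly the shape of the fix, as a sketch under assumptions -
`fetch_cert` and its parameters are illustrative, not the actual
cert_expiry code:

```python
import socket
import ssl


def fetch_cert(host, port=443, timeout=10):
    """Open a TLS connection using whichever address family the
    resolver returns (IPv4 or IPv6) and return the peer certificate."""
    # getaddrinfo yields (family, type, proto, canonname, sockaddr);
    # the first entry already carries the right family for this host.
    family, _, _, _, sockaddr = socket.getaddrinfo(
        host, port, proto=socket.IPPROTO_TCP
    )[0]

    context = ssl.create_default_context()
    with socket.socket(family, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        sock.connect(sockaddr)
        with context.wrap_socket(sock, server_hostname=host) as ssock:
            return ssock.getpeercert()
```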
* Fix line length violation
* Fix unavailable
Only set the state to unavailable on request exceptions after 1 minute has passed since the first occurrence.
* Put common code in _check_state_available
Move the common code that decides whether available should be set to False into the `_check_state_available` method.
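A minimal sketch of that logic - class and attribute names here are
assumptions, not the component's real ones:

```python
from datetime import datetime, timedelta

import requests

UNAVAILABLE_AFTER = timedelta(minutes=1)


class ExampleSensor:
    """Illustrative entity that tolerates transient request errors."""

    def __init__(self):
        self._available = True
        self._error_since = None

    def _check_state_available(self):
        """Flip to unavailable only after errors persist for a minute."""
        if self._error_since is None:
            self._error_since = datetime.now()
        elif datetime.now() - self._error_since > UNAVAILABLE_AFTER:
            self._available = False

    def update(self):
        try:
            self._fetch()
        except requests.exceptions.RequestException:
            self._check_state_available()
        else:
            self._available = True
            self._error_since = None

    def _fetch(self):
        """Stand-in for the actual request to the device/service."""
        raise requests.exceptions.RequestException("unreachable")
```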
* Unique ID support
New unique ID support, based on the hub's serial number and the device's permanent ID.
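The shape of the ID, as a sketch - the attribute names on the hub and
device objects are assumptions, not the fibaro component's actual
fields:

```python
class FibaroEntityId:
    """Illustration only: combine hub serial and device permanent ID."""

    def __init__(self, hub, device):
        self._hub = hub
        self._device = device

    @property
    def unique_id(self):
        """Return hub serial number + device permanent ID as one string."""
        return f"{self._hub.serial_number}.{self._device.permanent_id}"
```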
* Fixes, showing attributes
Minor fixes.
Expose room, hub and fibaro_id as attributes for easier mapping and finding of devices.
* Update fibaro.py
* Add support for multiple RainMachine controllers
* Member comments
* Member comments
* Member comments
* Cleanup
* More config flow cleanup
* Member comments
This bumps to the new version of the waterfurnace API. In the new
version the unit ID is no longer manually set by the user; instead it
is retrieved from the service after login. This is less error prone,
as it turns out that discovering the correct unit ID is hard from an
end user's perspective.
This is a breaking change to the config, as the unit parameter is
removed. However, I believe the number of users is very low (possibly
only 2), so adapting should be easy.
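For illustration, the schema change looks roughly like this - key
names are assumptions, not the component's actual configuration:

```python
import voluptuous as vol

# Illustrative only: the unit ID is no longer part of the schema;
# it comes back from the waterfurnace service after logging in.
CONFIG_SCHEMA = vol.Schema(
    {
        "waterfurnace": vol.Schema(
            {
                vol.Required("username"): str,
                vol.Required("password"): str,
                # vol.Required("unit"): str,  # removed by this change
            }
        )
    },
    extra=vol.ALLOW_EXTRA,
)
```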
After six months, the Chevy website has finally been reimplemented
into something that seems to work and is stable. The backend library
has been updated thanks to upstream help, and is now working again.