
Ultra efficient “air conditioner” (fans controlled with Home Assistant and Python) using cold outside air

Just getting this up as a draft now for a Reddit user.

In short, this Python script reads the temperatures from two different sensors (outside, from an Ambient Weather WS-2902C weather station, and our master bedroom, from a Govee Bluetooth thermometer), plus the temperature set point of a generic thermostat entity, does some logic, and turns a switch on or off, all with the Home Assistant API. The switch controls two basic box fans that are set to blow air into our bedroom from outside. It runs every X minutes (currently set to 5). This method works great if nighttime temperatures drop below 70F before bedtime. We like the bedroom temp at 66F, so unless it gets below 70F by around 9PM, it probably won’t cool enough for us to be comfortable enough to fall asleep. My wife wakes up at 5:45am and me at 6:30am, pending what our 22 month old daughter thinks of that schedule, so we typically aim to be asleep by 10pm.
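For reference, here is a minimal sketch of the kind of state read the script performs against the Home Assistant REST API (the base URL, entity name, and headers match the full script below; the token is your own):

import json
from requests import get

# same base URL and headers as the full script below
base_url = "http://ha.example.com:8123/"
request_headers = {
    "Authorization": "Bearer ey...Vw",
    "content-type": "application/json"
}

# read a single entity's current state from Home Assistant
response = get(base_url + "api/states/sensor.real_outside_temp",
               headers=request_headers)
outside_temp = float(json.loads(response.text)['state'])
print(outside_temp)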

Today, 2022-05-14, was the day I got out the window AC. It will be in place for the rest of the summer.

Requirements:

  • A working Home Assistant installation (mine is a Python venv install in an Ubuntu VM)
  • A long-lived access (bearer) token for your Home Assistant user
  • A working MQTT installation
  • A switch controllable by Home Assistant
  • 1-2 box fans plugged into said switch controlled by Home Assistant

Here is what the Home Assistant control screen looks like. The buttons should be self-explanatory. The generic thermostat entity doesn’t need to be on/active for this to work; the script just uses its set point for control purposes (set to 67.0F in the screenshot).

Home Assistant control screen for ultra efficient air conditioner system
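The buttons just publish one of the four state strings (off, on, normal_cool, max_cool) to the MQTT topic the script subscribes to, so you can trigger the same state changes from a shell if you have the mosquitto clients installed (host and topic match the script below):

mosquitto_pub -h mqtt.example.com -t climate/fan_control_state -m max_cool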

Max cool originally had a logic bug where it didn’t respect the delta_temp variable; the max_cool branch below checks delta_temp to account for that. Other than that it works perfectly. Below is a screenshot of the last 7 days showing the room cooling off nicely to the setpoint of 66F on nights 1-3 and 67F on nights 4-7. Switching a control device on and off every so often like this is a version of a bang-bang controller (a minimal sketch of the decision rule follows the captions below). If you look closely, you will notice that each on/off cycle results in a greater temperature drop, which is due to the progressively colder outside air being blown in for the same duration regardless of delta T.

Master bedroom temperature with Home Assistant controlled fans blowing in cold air from outside. Setpoint was 66F for evening of 5/7-5/9 (first 3 nights) and 67F for the rest (next 4 nights).
Zoomed in view of the evening of 5/9 to the morning of 5/10. The temperature drops quickly to the setpoint of 66F and does not go much above. Not sure what the spike is right after 23:00. The outside temp starts at 65F for this same timeframe, dropping to 60 at 21:00 and 55 at 22:00, so a great night to use cold outside air for cooling (thus the “ultra efficient AC”).
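Stripped of the MQTT and API plumbing, the decision rule amounts to a few lines. Here is a minimal sketch (a hypothetical standalone function, not code from the script itself):

# minimal sketch of the bang-bang rule with a delta_temp hysteresis band:
# run the fans only when the room is above the setpoint AND the outside
# air is at least delta_temp cooler than the room
def fans_should_run(current_temp, outside_temp, setpoint, delta_temp=3.0):
    needs_cooling = current_temp > setpoint
    can_cool = outside_temp < (current_temp - delta_temp)
    return needs_cooling and can_cool

Here is the full script: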
import json
import datetime
import time
from dateutil import parser
from requests import get, post
import paho.mqtt.client as mqttClient
import logging

logging.basicConfig(
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', level=logging.DEBUG)

loggers_to_set_to_warning = ['urllib3.connectionpool']
for l in loggers_to_set_to_warning:
    logging.getLogger(l).setLevel(logging.WARNING)

delta_temp = 3.0
mqtt_host = "mqtt.example.com"
mqtt_port = 1883

base_url = "http://ha.example.com:8123/"
states_url = base_url + "api/states/"
switch_url = base_url + "api/services/switch"
bearer_token = "ey...Vw"
full_bearer_token = "Bearer " + bearer_token
request_headers = {
    "Authorization": full_bearer_token,
    "content-type": "application/json"
}
endpoints = ["climate.masterbedfancooling",
             "sensor.real_outside_temp"]
fan_switch_entity_id = "switch.fan_switch"
states = {}
climate_topic = "climate/fan_control_state"
current_state = "off"
last_state = None
desired_seconds_to_sleep = 300
max_cool_outdoor_temp_limit = 66
desired_temp = None
outside_temp = None
current_temp = None
next_fan_action_time = datetime.datetime.now()


def set_fan_switch_state(state):
    new_fan_state = None
    if state == "on":
        new_fan_state = "on"
    elif state == "off":
        new_fan_state = "off"
    else:
        logging.warning("requested fan state unknown")
        return

    entity_info = {"entity_id": fan_switch_entity_id}
    full_url = switch_url + "/turn_" + new_fan_state
    response = post(full_url, headers=request_headers,
                    data=json.dumps(entity_info))
    if response.status_code != 200:
        logging.error(
            f"attempted to set fan state to {new_fan_state} but encountered error with status code: {response.status_code}")
    else:
        logging.info(f"successfully set fan state to {new_fan_state}")


def set_state_from_mqtt_message(message):
    global current_state, next_fan_action_time
    if message == "max_cool":
        current_state = "max_cool"
    elif message == "normal_cool":
        current_state = "normal_cool"
    elif message == "off":
        current_state = "off"
    elif message == "on":
        current_state = "on"
    else:
        logging.error(f"unable to determine state, setting to off")
        current_state = "off"
    logging.info(f"current_state set to: {current_state}")
    next_fan_action_time = datetime.datetime.now()


def connect_mqtt():
    # Set Connecting Client ID
    client = mqttClient.Client("python_window_fan_control")
    #client.username_pw_set(username, password)
    client.on_connect = on_connect
    client.connect(mqtt_host, mqtt_port)
    return client


def on_connect(client, userdata, flags, rc):
    if rc == 0:
        logging.info("Connected to MQTT Broker!")
    else:
        logging.info("Failed to connect, return code %d\n", rc)


def on_message(client, userdata, msg):
    logging.info(f"Received `{msg.payload.decode()}` from `{msg.topic}` topic")
    set_state_from_mqtt_message(msg.payload.decode())


def on_subscribe(client, userdata, mid, granted_qos):
    logging.info(f"subscribed to topic")


def set_fan_state(state):
    if state == "on":
        logging.info(f"setting fan state to on")
        set_fan_switch_state("on")
    elif state == "off":
        logging.info(f"setting fan state to off")
        set_fan_switch_state("off")


def get_and_set_temperatures():
    global desired_temp, current_temp, outside_temp
    logging.debug("executing loop")

    for endpoint in endpoints:
        full_url = states_url + endpoint
        response = get(full_url, headers=request_headers)
        parsed_json = json.loads(response.text)
        entity = parsed_json['entity_id']
        hvac_action = ""
        if endpoint == 'climate.masterbedfancooling':
            desired_temp = float(
                parsed_json['attributes']['temperature'])
            current_temp = float(
                parsed_json['attributes']['current_temperature'])
            hvac_action = parsed_json['attributes']['hvac_action']
        elif endpoint == 'sensor.real_outside_temp':
            outside_temp = float(parsed_json['state'])
        last_updated = parser.parse(parsed_json['last_updated'])
    logging.info(
        f"temps: current={current_temp}, desired={desired_temp}, outside={outside_temp}")
    if desired_temp is None or current_temp is None or outside_temp is None:
        logging.error(
            "one or more temps invalid, turning off switch and breaking execution")
        set_fan_state("off")


client = mqttClient.Client("window-fan-client")
client.on_connect = on_connect
client.on_message = on_message
client.on_subscribe = on_subscribe
client.connect(mqtt_host, mqtt_port)
client.subscribe(climate_topic)
client.loop_start()

while True:
    # logging.info("loop")
    if datetime.datetime.now() > next_fan_action_time:
        logging.info("fan action time")
        if current_state == "off":
            new_fan_state = "off"
            logging.info(
                f"current_state is {current_state}, turning fan {new_fan_state}")
            set_fan_state("off")
        else:
            get_and_set_temperatures()

        if current_state == "max_cool":
            logging.info(
                f"current_state is {current_state}, call for max cooling")
            # max cool also needs the outside air to be delta_temp cooler than
            # inside, otherwise the fans can blow in warmer air (the delta_temp
            # bug mentioned above)
            if outside_temp < max_cool_outdoor_temp_limit and outside_temp < (current_temp - delta_temp):
                logging.info(
                    f"able to max_cool with outside temp: {outside_temp}, lower than {max_cool_outdoor_temp_limit} ")
                set_fan_state("on")
            elif outside_temp < (current_temp - delta_temp):
                logging.info(
                    f"unable to max cool, but still can cool with outside: {outside_temp} and inside: {current_temp}")
                set_fan_state("on")
            else:
                logging.info("unable to cool at all, turning fan off")
                set_fan_state("off")
        elif current_state == "normal_cool":
            logging.info(
                f"current_state is {current_state}")
            if (current_temp > desired_temp):
                logging.info(
                    f"call for cooling. current: {current_temp}, desired: {desired_temp}")
                # the outside air must be at least delta_temp cooler than
                # the room for the fans to do any good
                if outside_temp < (current_temp - delta_temp):
                    logging.info("can cool, turning fan on")
                    set_fan_state("on")
                else:
                    logging.info("can't cool, turning fan off")
                    set_fan_state("off")
            else:
                logging.info("no need for cooling, turning fans off")
                set_fan_state("off")
        elif current_state == "on":
            new_fan_state = "on"
            logging.info(
                f"current_state is {current_state}, turning fan {new_fan_state}")
            set_fan_state("on")

        last_state = current_state
        next_fan_action_time = datetime.datetime.now() \
            + datetime.timedelta(seconds=desired_seconds_to_sleep)
        logging.info(f"next fan action in {desired_seconds_to_sleep} seconds")
        logging.info("---------loop end-------------------")
    else:
        #logging.debug("not fan action time yet, sleeping 1s")
        time.sleep(0.25)
    # actual_seconds_to_sleep = desired_seconds_to_sleep - datetime.datetime.now().minute % desired_seconds_to_sleep
    # seconds_to_sleep = actual_minutes_to_sleep * 60.0
    # logging.info(f"sleeping {desired_seconds_to_sleep}s")
    # time.sleep(desired_seconds_to_sleep)

Deploying a Kubernetes Cluster within Proxmox using Ansible

Introduction / Background

This post has been a long time coming. I apologize for how long it’s taken. I noticed that many other blogs left off at a similar position as I did: get the VMs created, then… nothing. Creating a Kubernetes cluster locally is a much cheaper (read: basically free) way to learn how Kubes works compared to a cloud-hosted solution or a full-blown managed Kubernetes engine, such as AWS Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), or Google Kubernetes Engine (GKE).

Anyways, I finally had some time to complete the tutorial series so here we are. Since the last post, my wife and I are now expecting our 2nd kid, I put up a new solar panel array, built our 1st kid a new bed, messed around with MacOS Monterey on Proxmox, built garden boxes, and a bunch of other stuff. Life happens. So without much more delay let’s jump back in.

Here’s a screenshot of the end state Kubernetes Dashboard showing our nodes:

Kubernetes Dashboard showing our Proxmox VM nodes deployed via Terraform

Current State

If you’ve followed the blog series so far, you should have four VMs in your Proxmox cluster ready to go with SSH keys set, the hard drive expanded, and the right amount of vCPUs and memory allocated. If you don’t have those ready to go, take a step back (Deploying Kubernetes VMs in Proxmox with Terraform) and get caught up. We’re not going to use the storage VM. Some guides I followed had one but I haven’t found a need for it yet so we’ll skip it.

VMs in Proxmox ready for Kubernetes installation

Ansible

What is Ansible

If you ask DuckDuckGo to define ansible, it will tell you the following: “A hypothetical device that enables users to communicate instantaneously across great distances; that is, a faster-than-light communication device.”

In our case, it is “an open-source software provisioning, configuration management, and application-deployment tool enabling infrastructure as code.”

We will thus be using Ansible to run the initial Kubernetes set up steps on every machine, initialize the cluster on the master, and join the cluster on the workers/agents.

Initial Ansible Housekeeping

First we need to specify some variables similar to how we did it with Terraform. Create a file in your working directory called ansible-vars.yml and put the following into it:

# specifying a CIDR for our cluster to use.
# can be basically any private range except for ranges already in use.
# apparently it isn't too hard to run out of IPs in a /24, so we're using a /22
pod_cidr: "10.16.0.0/22"

# this defines what the join command filename will be
join_command_location: "join_command.out"

# setting the home directory for retrieving, saving, and executing files
home_dir: "/home/ubuntu"

Equally important (and potentially a better starting point than the variables) is defining the hosts. In ansible-hosts.txt:

# this is a basic file putting different hosts into categories
# used by ansible to determine which actions to run on which hosts

[all]
10.98.1.41
10.98.1.51
10.98.1.52

[kube_server]
10.98.1.41

[kube_agents]
10.98.1.51
10.98.1.52

[kube_storage]
#10.98.1.61

Checking that Ansible can communicate with our hosts

Let’s pause here and make sure Ansible can communicate with our VMs. We will use a simple built-in module named ‘ping’ to do so. The command below, broken down:

  • -i ansible-hosts.txt – use the ansible-hosts.txt file
  • all – run the command against the [all] block from the ansible-hosts.txt file
  • -u ubuntu – log in with user ubuntu (since that’s what we set up with the Ubuntu 20.04 Cloud Init template). If you don’t use -u [user], Ansible will attempt to SSH as your currently logged-in user.
  • -m ping – run the ping module
ansible -i ansible-hosts.txt all -u ubuntu -m ping

If all goes well, you will receive “ping”: “pong” for each of the VMs you have listed in the [all] block of the ansible-hosts.txt file.

Using Ansible’s ping to check communications with each of the VMs for deployment
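If the screenshot is hard to make out, each host returns something roughly like this (the exact fields vary a bit by Ansible version):

10.98.1.41 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}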

Potential SSH errors

If you’ve previously SSH’d to these IPs and have subsequently destroyed/re-created the VMs, you will get scary-sounding SSH errors about the remote host identification having changed. Run the suggested ssh-keygen -f command for each of the IPs to fix it.

You might also have to SSH into each of the hosts to accept the host key. I’ve done this whole procedure a couple times so I don’t recall what will pop up on the first attempt.

SSH remote host identification has changed error. Run suggested ssh-keygen -f command to resolve.
ssh-keygen -f "/home/<username_here>/.ssh/known_hosts" -R "10.98.1.41"
ssh-keygen -f "/home/<username_here>/.ssh/known_hosts" -R "10.98.1.51"
ssh-keygen -f "/home/<username_here>/.ssh/known_hosts" -R "10.98.1.52"
ssh-keygen -f "/home/<username_here>/.ssh/known_hosts" -R "10.98.1.61"

Installing Kubernetes dependencies with Ansible

Then we need a script to install the dependencies and the Kubernetes utilities themselves. This playbook does quite a few things: it gets apt ready to install packages, adds the Docker and Kubernetes signing keys and repositories, installs Docker and the Kubernetes binaries, disables swap, and adds the ubuntu user to the Docker group.

ansible-install-kubernetes-dependencies.yml:

# https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/
# https://github.com/virtualelephant/vsphere-kubernetes/blob/master/ansible/cilium-install.yml#L57

# ansible .yml files define what tasks/operations to run

---
- hosts: all # run on the "all" hosts category from ansible-hosts.txt
  # become means be superuser
  become: true
  remote_user: ubuntu
  tasks:
  - name: Install packages that allow apt to be used over HTTPS
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg-agent
      - software-properties-common

  - name: Add an apt signing key for Docker
    apt_key:
      url: https://download.docker.com/linux/ubuntu/gpg
      state: present

  - name: Add apt repository for stable version
    apt_repository:
      repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
      state: present

  - name: Install docker and its dependencies
    apt: 
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - docker-ce 
      - docker-ce-cli 
      - containerd.io
      
  - name: verify docker installed, enabled, and started
    service:
      name: docker
      state: started
      enabled: yes
      
  - name: Remove swapfile from /etc/fstab
    mount:
      name: "{{ item }}"
      fstype: swap
      state: absent
    with_items:
      - swap
      - none

  - name: Disable swap
    command: swapoff -a
    when: ansible_swaptotal_mb > 0
    
  - name: Add an apt signing key for Kubernetes
    apt_key:
      url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
      state: present

  - name: Adding apt repository for Kubernetes
    apt_repository:
      repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
      state: present
      filename: kubernetes.list

  - name: Install Kubernetes binaries
    apt: 
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
        # it is usually recommended to specify which version you want to install
        - kubelet=1.23.6-00
        - kubeadm=1.23.6-00
        - kubectl=1.23.6-00
        
  - name: hold kubernetes binary versions (prevent from being updated)
    dpkg_selections:
      name: "{{ item }}"
      selection: hold
    loop:
      - kubelet
      - kubeadm
      - kubectl
        
# this has to do with nodes having different internal/external/mgmt IPs
# {{ node_ip }} comes from vagrant, which I'm not using yet
#  - name: Configure node ip - 
#    lineinfile:
#      path: /etc/default/kubelet
#      line: KUBELET_EXTRA_ARGS=--node-ip={{ node_ip }}

  - name: Restart kubelet
    service:
      name: kubelet
      daemon_reload: yes
      state: restarted
      
  - name: add ubuntu user to docker group
    user:
      name: ubuntu
      groups: docker
      append: yes
  
  - name: reboot to apply swap disable
    reboot:
      reboot_timeout: 180 #allow 3 minutes for reboot to happen

With our fresh VMs straight outta Terraform, let’s now run the Ansible script to install the dependencies.

Ansible command to run the Kubernetes dependency playbook (pretty straightforward: the -i argument specifies the hosts file, and the next argument is the playbook file itself):

ansible-playbook -i ansible-hosts.txt ansible-install-kubernetes-dependencies.yml

It’ll take a bit of time to run (1m26s in my case). If all goes well, you will be presented with a summary screen (called PLAY RECAP) showing some items in green with status ok and some items in orange with status changed. I got 13 ok, 10 changed, and 1 skipped.

Ansible play recap showing successful Kubernetes dependencies installation
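For reference, the recap lines look roughly like this (one line per host; your counts should match the numbers above):

PLAY RECAP *********************************************************
10.98.1.41 : ok=13 changed=10 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0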

Initialize the Kubernetes cluster on the master

With the dependencies installed, we can now proceed to initialize the Kubernetes cluster itself on the server/master machine. This playbook sets Docker to use the systemd cgroups driver (the alternative is cgroupfs; systemd was the easiest option here), initializes the cluster, copies the cluster config to the ubuntu user’s home directory, and installs the Calico networking plugin and the standard Kubernetes dashboard.

ansible-init-cluster.yml:

- hosts: kube_server
  become: true
  remote_user: ubuntu
  
  vars_files:
    - ansible-vars.yml
    
  tasks:
  - name: set docker to use systemd cgroups driver
    copy:
      dest: "/etc/docker/daemon.json"
      content: |
        {
          "exec-opts": ["native.cgroupdriver=systemd"]
        }
  - name: restart docker
    service:
      name: docker
      state: restarted
    
  - name: Initialize Kubernetes cluster
    command: "kubeadm init --pod-network-cidr {{ pod_cidr }}"
    args:
      creates: /etc/kubernetes/admin.conf # skip this task if the file already exists
    register: kube_init
    
  - name: show kube init info
    debug:
      var: kube_init
      
  - name: Create .kube directory in user home
    file:
      path: "{{ home_dir }}/.kube"
      state: directory
      owner: 1000
      group: 1000

  - name: Configure .kube/config files in user home
    copy:
      src: /etc/kubernetes/admin.conf
      dest: "{{ home_dir }}/.kube/config"
      remote_src: yes
      owner: 1000
      group: 1000
      
  - name: restart kubelet for config changes
    service:
      name: kubelet
      state: restarted
      
  - name: get calico networking
    get_url:
      url: https://projectcalico.docs.tigera.io/manifests/calico.yaml
      dest: "{{ home_dir }}/calico.yaml"
      
  - name: apply calico networking
    become: no
    command: kubectl apply -f "{{ home_dir }}/calico.yaml"
    
  - name: get dashboard
    get_url:
      url: https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
      dest: "{{ home_dir }}/dashboard.yaml"
    
  - name: apply dashboard
    become: no
    command: kubectl apply -f "{{ home_dir }}/dashboard.yaml"

Initializing the cluster took 53s on my machine. One of the first tasks is to download the images which takes the majority of the duration. You should get 13 ok and 10 changed with the init. I had two extra user check tasks because I was fighting some issues with applying the Calico networking.

ansible-playbook -i ansible-hosts.txt ansible-init-cluster.yml
Successful Kubernetes init execution showing join token at the bottom

Getting the join command and joining worker nodes

With the master up and running, we need to retrieve the join command. I chose to save the command locally and read the file in a subsequent Ansible playbook. This could certainly be combined into a single playbook.

ansible-get-join-command.yml –

- hosts: kube_server
  become: false
  remote_user: ubuntu
  
  vars_files:
    - ansible-vars.yml
    
  tasks:
  - name: Extract the join command
    become: true
    command: "kubeadm token create --print-join-command"
    register: join_command
    
  - name: show join command
    debug:
      var: join_command
      
  - name: Save kubeadm join command for cluster
    local_action: copy content={{ join_command.stdout_lines | last | trim }} dest={{ join_command_location }} # defaults to your local cwd/join_command.out

And for the command:

ansible-playbook -i ansible-hosts.txt ansible-get-join-command.yml
Successfully retrieved the join command and saved it to the local machine

Now to join the workers/agents, our Ansible playbook will read that join_command.out file and use it to join the cluster.

ansible-join-workers.yml –

- hosts: kube_agents
  become: true
  remote_user: ubuntu
  
  vars_files:
    - ansible-vars.yml
    
  tasks:
  - name: set docker to use systemd cgroups driver
    copy:
      dest: "/etc/docker/daemon.json"
      content: |
        {
          "exec-opts": ["native.cgroupdriver=systemd"]
        }
  - name: restart docker
    service:
      name: docker
      state: restarted
    
  - name: read join command
    debug: msg={{ lookup('file', join_command_location) }}
    register: join_command_local
    
  - name: show join command
    debug:
      var: join_command_local.msg
      
  - name: join agents to cluster
    command: "{{ join_command_local.msg }}"

And to actually join:

ansible-playbook -i ansible-hosts.txt ansible-join-workers.yml
Two worker agents successfully joined to the cluster

With the two worker nodes/agents joined up to the cluster, you now have a full-on Kubernetes cluster up and running! Wait a few minutes, then log into the server and run kubectl get nodes to verify they are present and active (STATUS = Ready):

kubectl get nodes
‘kubectl get nodes’ showing our nodes as ready
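If you’d rather not squint at the screenshot, the output looks roughly like the following (hostnames will be whatever you assigned when creating the VMs):

NAME          STATUS   ROLES                  AGE   VERSION
k8s-server1   Ready    control-plane,master   10m   v1.23.6
k8s-agent1    Ready    <none>                 2m    v1.23.6
k8s-agent2    Ready    <none>                 2m    v1.23.6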

Kubernetes Dashboard

Everyone likes a dashboard. Kubernetes has a good one for poking/prodding around. It appears to basically be a visual representation of most (all?) of the “get information” types of commands you can run with kubectl (kubectl get nodes, get pods, describe stuff, etc.).

The dashboard was installed with the cluster init script but we still need to create a service account and cluster role binding for the dashboard. These steps are from https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md. NOTE: the docs state it is not recommended to give admin privileges to this service account. I’m still figuring out Kubernetes privileges so I’m going to proceed anyways.

Dashboard user/role creation

On the master machine, create a file called sa.yaml with the following contents:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

And another file called clusterrole.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Apply both, then get the token to be used for logging in. The last command will spit out a long string. Copy it starting at ‘ey’ and ending before the username (ubuntu). In the screenshot I have highlighted which part is the token.

kubectl apply -f sa.yaml
kubectl apply -f clusterrole.yaml
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
Applying both templates and getting the user’s token

SSH Tunnel & kubectl proxy

At this point, the dashboard has been running for a while; we just can’t get to it yet. Two distinct steps need to happen. The first is to create an SSH tunnel between your local machine and a machine in the cluster (we will be using the master). Then, from within that SSH session, we will run kubectl proxy to expose the web services.

SSH command – the master’s IP is 10.98.1.41 in this example:

ssh -L 8001:127.0.0.1:8001 ubuntu@10.98.1.41

The above command will open what appears to be a standard SSH session but the tunnel is running as well. Now execute kubectl proxy:
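kubectl proxy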

Kubernetes SSH tunnel & kubectl proxy output

The Kubernetes Dashboard

At this point, you should be able to navigate to the dashboard page from a web browser on your local machine (http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/) and you’ll be prompted for a login. Make sure the token radio button is selected and paste in that long token from earlier. It expires relatively quickly (a couple hours, I think) so be ready to run the token retrieval command again.

Kubernetes dashboard login with token

The default view is for the “default” namespace which has nothing in it at this point. Change it to All namespaces for more details:

Kubernetes dashboard all namespaces

From here you can see information about everything in the cluster:

Kubernetes dashboard showing relatively default workloads

Conclusion

With this last post, we have concluded the journey from creating an Ubuntu cloud-init image in Proxmox, to using Terraform to deploy Kubernetes VMs in Proxmox, all the way through deploying an actual Kubernetes cluster in Proxmox using Ansible. Hope you found this useful!

Video link coming soon.

Discussion

For discussion, either leave a comment here or if you’re a Reddit user, head on over to https://www.reddit.com/r/austinsnerdythings/comments/ubsk1i/i_made_a_tutorial_showing_how_to_deploy_a/.

References

https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/

https://github.com/virtualelephant/vsphere-kubernetes


How to install MacOS Monterey in a Proxmox 7 VM

I recently installed MacOS Monterey (12.1) in a Proxmox 7 virtual machine and made a YouTube video showing the process – https://youtu.be/HBAPscDD30M.

I followed the instructions on Nick Sherlock’s blog – https://www.nicksherlock.com/2021/10/installing-macos-12-monterey-on-proxmox-7/. He’s pretty good at instructions so I’ll just leave the link.

My main use for the MacOS VM is Apple’s Xcode for basic app development (Swift, SwiftUI, UIKit, React Native, etc.) in a fairly fast environment. I have a slower actual Mac for publishing and such, but I like the flexibility of working in a virtual environment within Proxmox.

I’m still thinking about writing a script to automate the deployment – keep checking back if you’re interested!

This post mostly serves as a link to the YouTube video for how to install MacOS Monterey in Proxmox.

Here’s a picture of the environment showing my Xeon e5-2678v3 as the processor in the MacOS desktop:

MacOS Monterey running in a Proxmox virtual machine for Xcode

For Part 2, which covers activities such as setting your MacOS Monterey VM to automatically boot, consolidating the OpenCore bootloader disk, and reviewing the procedure for activating your Mac VM, see the following link: https://www.youtube.com/watch?v=oF7n2ejdTPU


Using the Govee Bluetooth Thermometer with Home Assistant (Python and MQTT)

Introduction

Like many other Home Automation enthusiasts, I have been on the lookout for a cheap thermometer/hygrometer that has either WiFi or Bluetooth connectivity. I think I found the answer in the $12 USD Govee Bluetooth Digital Thermometer and Hygrometer. The fact that the device broadcasts the current temperature and humidity every 2 seconds via low energy Bluetooth (BLE) is the metaphorical icing on the cake. Here is a pic of the unit in our garage:

Govee bluetooth thermometer and hygrometer showing 56°F/24% in a garage

Specifications

This is a pretty basic device. It has a screen that shows the current temperature, humidity, and min/max values. It sends the current readings (along with battery health) every 2 seconds via low-energy bluetooth (BLE). The description on Amazon says it has a “Swiss-made smart hygrometer sensor”. Dunno if I believe that but for $12 it is good enough. The temperature is accurate to +/- 0.54F and humidity is +/- 3% RH. If you use the app, it is apparently possible to read the last 20 days or 2 years of data from the device (I haven’t used the app at all).

Enough with the boring stuff. Let’s get it connected to Home Assistant.

Reading the Govee bluetooth advertisements with Python

I found some sample code on tchen’s GitHub page (link) to help get me going in the right direction.

Without further ado, here is the code I’m using to read the data and publish via MQTT:

observe.py (updated 2022-01-04 with some logging improvements. default logging level is now WARNING, which disables printing every advertisement)

# basic govee bluetooth data reading code for model https://amzn.to/3z14BIi
# written/modified by Austin of austinsnerdythings.com 2021-12-27
# original source: https://gist.github.com/tchen/65d6b29a20dd1ef01b210538143c0bf4
import logging
import json
from time import sleep
from basic_mqtt import basic_mqtt
from bleson import get_provider, Observer

logging.basicConfig(
	format='%(asctime)s.%(msecs)03d - %(name)s - %(levelname)s - %(message)s',
	datefmt='%Y-%m-%d %H:%M:%S',
	level=logging.WARNING)

# I did write all the mqtt stuff. I kept it in a separate class
mqtt = basic_mqtt()
mqtt.connect()

# writing code on my windows computer, committing, pushing, and then 
# pulling the new code on the Raspberry Pi is a tedious "debug" process.
# there are quite a few errors that spit out from bleson, which as far
# as I can tell, isn't super polished. if we set the debug level to
# critical, most of those disappear.
logging.getLogger("bleson").setLevel(logging.CRITICAL)

# basic celsius to fahrenheit function
def c2f(val):
    return round(32 + 9*val/5, 2)

last = {}

# I didn't write this, but it takes the raw BT data and spits out the data of interest
def temp_hum(values, battery, address):
    global last
    #########################
    #
    # there is a fix for temperatures below freezing here - 
    # https://github.com/joshgordon/govee_ble_to_mqtt/blob/master/observe.py
    # I will adjust post code afternoon of 2022-11-28
    #
    #########################
    values = int.from_bytes(values, 'big')
    if address not in last or last[address] != values:
        last[address] = values
        temp = float(values / 10000)
        hum = float((values % 1000) / 10)
        # this print looks like this:
        # 2021-12-27T11:22:17.040469 BDAddress('A4:C1:38:9F:1B:A9') Temp: 45.91 F  Humidity: 25.8 %  Battery: 100 %
        logging.info(f"decoded values: {address} Temp: {c2f(temp)} F  Humidity: {hum} %  Battery: {battery}")
        # this code originally just printed the data, but we need it to publish to mqtt.
        # added the return values to be used elsewhere
        return c2f(temp), hum, battery

def on_advertisement(advertisement):
    #print(advertisement)
    mfg_data = advertisement.mfg_data

    # there are lots of BLE advertisements flying around. only look at ones that have mfg_data
    if mfg_data is not None:
        #print(advertisement)
        # there are a few Govee models and the data is in different positions depending on which
        # the unit of interest isn't either of these hardcoded values, so they are skipped
        if advertisement.name == 'GVH5177_9835':
            address = advertisement.address
            temp_hum(mfg_data[4:7], mfg_data[7], address)
        elif advertisement.name == 'GVH5075_391D':
            address = advertisement.address
            temp_hum(mfg_data[3:6], mfg_data[6], address)
        elif advertisement.name is not None:
            # this is where all of the advertisements for our unit of interest will be processed
            address = advertisement.address
            if 'GVH' in advertisement.name:
                #print(advertisement)
                # temp_hum() returns None when the values haven't changed,
                # so guard against unpacking None
                result = temp_hum(mfg_data[3:6], mfg_data[6], address)
                if result is None:
                    return
                temp_f, hum, battery = result

                if temp_f > 180.0 or temp_f < -30.0:
                    return
                # as far as I can tell bleson doesn't have a string representation of the MAC address
                # address is of type BDAddress(). str(address) is BDAddress('A4:C1:38:9F:1B:A9')
                # substring 11:-2 is just the MAC address
                mac_addr = str(address)[11:-2]

                # construct dict with relevant info
                msg = {'temp_f':temp_f,
                        'hum':hum,
                        'batt':battery}
                
                # looks like this:
                # msg data: {'temp_f': 45.73, 'hum': 25.5, 'batt': 100}
                logging.info(f"MQTT msg data: {msg}")

                # turn into JSON for publishing
                json_string = json.dumps(msg, indent=4)

                # publish to topic separated by MAC address
                mqtt.publish(f"govee/{mac_addr}", json_string)

# base stuff from the original gist
logging.warning(f"initializing bluetooth")
adapter = get_provider().get_adapter()
observer = Observer(adapter)
observer.on_advertising_data = on_advertisement

logging.warning(f"starting observer")
observer.start()
logging.warning(f"listening for events and publishing to MQTT")
while True:
    # unsure about this loop and how much of a delay works
    sleep(1)
observer.stop()

And for the MQTT helper class (basic_mqtt.py):

# basic MQTT helper class. really needed to write one of these to simplify basic MQTT operations
# written/modified by Austin of austinsnerdythings.com 2021-12-27
# original source: https://gist.github.com/fisherds/f302b253cf7a11c2a0d814acd424b9bb
# filename is basic_mqtt.py
from paho.mqtt import client as mqtt_client
import logging
import datetime
logging.basicConfig(
	format='%(asctime)s.%(msecs)03d - %(name)s - %(levelname)s - %(message)s',
	datefmt='%Y-%m-%d %H:%M:%S',
	level=logging.INFO)


mqtt_host = "mqtt.home.fluffnet.net"
test_topic = "mqtt_test_topic"

# this is really not polished. it was a stream of consciousness project to pound
# something out to do basic MQTT publish stuff in a reusable fashion.
class basic_mqtt:
	def __init__(self):
		self.client = mqtt_client.Client()
		self.subscription_topic_name = None
		self.publish_topic_name = None
		self.callback = None
		self.host = mqtt_host
	
	def connect(self):
		self.client.on_connect = self.on_connect
		self.client.on_subscribe = self.on_subscribe
		self.client.on_message = self.on_message
		logging.info(f"connecting to MQTT broker at {mqtt_host}")
		self.client.connect(host=mqtt_host,keepalive=30)
		self.client.loop_start()

	def on_connect(self, client, userdata, flags, rc):
		print("Connected with result code "+str(rc))

		# Subscribing in on_connect() means that if we lose the connection and
		# reconnect then subscriptions will be renewed.
		#self.client.subscribe("$SYS/#")

	def on_message(self, client, userdata, msg):
		print(f"got message of topic {msg.topic} with payload {msg.payload}")

	def on_subscribe(self, client, userdata, mid, granted_qos):
		print("Subscribed: " + str(mid) + " " + str(granted_qos))

	def publish(self, topic, msg):
		self.client.publish(topic=topic, payload=msg)

	def subscribe_to_test_topic(self, topic=test_topic):
		self.client.subscribe(topic)

	def send_test_message(self, topic=test_topic):

		self.publish(topic=topic, msg=f"test message from python script at {datetime.datetime.now()}")

	def disconnect(self):
		self.client.disconnect()

	def loop(self):
		self.client.loop_forever()

if __name__ == "__main__":
	logging.info("running MQTT test")
	mqtt_helper = basic_mqtt()
	mqtt_helper.connect()
	mqtt_helper.subscribe_to_test_topic()
	mqtt_helper.send_test_message()
	mqtt_helper.disconnect()

Results

Running this script on a Raspberry Pi 3 shows the advertisements coming in as expected. The updates are very quick compared to the usual 16 second update interval for my Acurite stuff.

screenshot of Python code running to receive BLE advertisements from Govee Bluetooth Thermometer

And running mosquitto_sub with the right arguments (mosquitto_sub -h mqtt -v -t "govee/#") shows the MQTT messages are being published as expected:

mosquitto_sub showing our published MQTT messages with temperature/humidity data from the Govee sensor
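Each line looks roughly like this (the topic’s MAC address and the JSON payload fields come straight from the Python code above):

govee/A4:C1:38:9F:1B:A9 {
    "temp_f": 45.73,
    "hum": 25.5,
    "batt": 100
}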

Getting Govee MQTT data into Home Assistant

Lastly, we need to add a MQTT sensor to get the data importing into Home Assistant:

- platform: mqtt
  state_topic: "govee/A4:C1:38:9F:1B:A9"
  value_template: "{{ value_json.temp_f }}"
  name: "garage temp"
  unit_of_measurement: "F"
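Note: newer Home Assistant releases have since moved MQTT sensors out from under sensor: and into a top-level mqtt: block, so depending on your version this platform: mqtt syntax may need to be restructured accordingly.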

And from there you can do whatever you want with the collected data!

Home Assistant displaying data from Govee Bluetooth temperature and humidity sensor

Conclusion

This was a relatively quick post and code development. I really hate the cycle of developing on my Bluetooth-less Windows computer, committing the code, pushing to Git, pulling on the Pi, and running to “debug”. Thus, the code isn’t as good as it can be. I probably did 20-25 iterations before calling it good enough.

Regardless, I think this $12 Govee Bluetooth Thermometer and Hygrometer is a great little tool for collecting data around the house. Unlike receiving Acurite beacons, you don’t need an SDR, and you don’t need to spend a lot either; you just need a way to receive BLE advertisements (basically any Bluetooth-capable device can do this). There is even a 2-pack of just the sensors that I just discovered on Amazon for $24 – 2 pack of Govee bluetooth thermometer and hygrometer. I know there are other Bluetooth devices but they’re quite a bit more expensive.

Disclosure: Some of the links on this post are Amazon affiliate links. This means that, at zero cost to you, I will earn an affiliate commission if you click through the link and finalize a purchase.

Also, at least for me, these are available with same day shipping on Amazon. Scratch that instant gratification itch.

Same day shipping available on Amazon near Denver for the Govee sensors

Coding a pitch/roll/altitude autopilot in X-Plane with Python

Introduction

Sorry it has taken me so long to write this post! The last post on the blog was October 19th – almost 6 weeks ago. Life happens. We have a 15 month old running around and she is a handful!

March 2024 update – we now have a 1.5 year old AND the 15 month old is 3.5 years old! Link to the next post in this series – Adding some polish to the X-Plane Python Autopilot with Flask, Redis, and WebSockets

Anyways, back to the current topic – coding a pitch/roll (2-axis) autopilot with altitude and heading hold in X-Plane using Python. Today we will be adding the following:

  • Real-time graphing for 6 parameters
  • Additional method to grab data out of X-Plane
  • A normalize function to limit outputs to reasonable values
  • Altitude preselect and hold function

The full code will be at the end of this post.

Video Link

coming soon

Contents

  1. Adding PyQtGraph
  2. Developing a normalize function
  3. Initializing the data structures to hold the graph values
  4. Defining the PyQtGraph window and parameters
  5. Getting more data points out of X-Plane
  6. Feeding the graph data structures with data
  7. Adding altitude preselect and hold

1 – Adding PyQtGraph

Pip is my preferred tool to manage Python packages. It is easy and works well. I initially tried graphing with Matplotlib, but it took 200-300 ms per update, which really dragged down the control loop to the point of being unusable. Instead, we will be using PyQtGraph. Install it with Pip:

pip install pyqtgraph

2 – Developing a normalize function

This task is pretty straightforward. There are a couple of places where we want to pass values that need to stay within a certain range. The first example is the client.sendCTRL() method, which sets the control surfaces in X-Plane. The documentation states values are expected to be from -1 to 1. I have gotten some really weird results sending values outside that range (specifically for throttle: if you send something like 4, you can end up with 400% throttle, which is wayyy more than the engines can actually output).

# this function takes an input and either passes it through or adjusts
# the input to fit within the specified max/min values
def normalize(value, min=-1, max=1):
	# if value = 700, and max = 20, return 20
	# if value = -200, and min = -20, return -20
	if (value > max):
		return max
	elif (value < min):
		return min
	else:
		return value

3 – Initializing the graphing data structures

We need a couple of arrays to store the data for our graphs. We need (desired data to plot) + 1 arrays. The +1 is the x-axis, which will just store values like 0,1,2,3,etc. The others will be the y-values. We haven’t added the altitude stuff yet, so you can add them but they won’t be used yet.

x_axis_counters = [] #0, 1, 2, 3, etc. just basic x-axis values used for plotting
roll_history = []
pitch_history = []
#altitude_history = []
roll_setpoint_history = []
pitch_setpoint_history = []
#altitude_setpoint_history = []
plot_array_max_length = 100 # how many data points to hold in our arrays and graph
i = 1 # initialize x_axis_counter

4 – Defining the PyQtGraph window and parameters

Working with PyQtGraph more or less means we’ll be working with a full-blown (if stripped-down) GUI.

# first the base app needs to be instantiated
app = pg.mkQApp("python xplane autopilot monitor")

# now the window itself is defined and sized
win = pg.GraphicsLayoutWidget(show=True)
win.resize(1000,600) #pixels
win.setWindowTitle("XPlane autopilot system control")

# we have 3 subplots
p1 = win.addPlot(title="roll",row=0,col=0)
p2 = win.addPlot(title="pitch",row=1,col=0)
p3 = win.addPlot(title="altitude", row=2, col=0)

# show the y grid lines to make it easier to interpret the graphs
p1.showGrid(y=True)
p2.showGrid(y=True)
p3.showGrid(y=True)

5 – Getting more data points out of X-Plane

The initial .getPOSI() method that came in the example has worked well for us so far. But at this point we need more data that isn’t available in the .getPOSI() method. We will be utilizing a different method called .getDREFs() which is short for ‘get data references’. We will need to construct a list of data references we want to retrieve, pass that list to the method, and then parse the output. It is more granular than .getPOSI(). I haven’t evaluated the performance but I don’t think it is a problem.

The DREFs we want are for indicated airspeed, magnetic heading (.getPOSI() has true heading, not magnetic), an indicator to show if we are on the ground or not, and height as understood by the flight model. Thus, we can define our DREFs as follows:

DREFs = ["sim/cockpit2/gauges/indicators/airspeed_kts_pilot",
		"sim/cockpit2/gauges/indicators/heading_electric_deg_mag_pilot",
		"sim/flightmodel/failures/onground_any",
		"sim/flightmodel/misc/h_ind"]

And we can get the data with client.getDREFs(DREFs). The returned object is a 2d array. We need to parse out our values of interest. The full data gathering code looks like this:

posi = client.getPOSI()
ctrl = client.getCTRL()
multi_DREFs = client.getDREFs(DREFs)

current_roll = posi[4]
current_pitch = posi[3]
current_hdg = multi_DREFs[1][0]
current_altitude = multi_DREFs[3][0]
current_asi = multi_DREFs[0][0]
onground = multi_DREFs[2][0]

With those data points, we have everything we need to start plotting the state of our aircraft and monitoring for PID tuning.

6 – Feeding the real-time graphs with data

Next up is actually adding data to be plotted. There are two scenarios to consider when adding data to the arrays: 1) the arrays have not yet reached the limit we set earlier (100 points), and 2) they have. Case 1 is easy. We just append the current values to the arrays:

x_axis_counters.append(i)
roll_history.append(current_roll)
roll_setpoint_history.append(desired_roll)
pitch_history.append(current_pitch)
pitch_setpoint_history.append(pitch_PID.SetPoint)
altitude_history.append(current_altitude)
altitude_setpoint_history.append(desired_altitude)

The above code will work perfectly fine if you want the arrays to grow infinitely large over time. Ain’t nobody got time for that, so we need to check how long the arrays are and evict old data. We’ll check the length of the x-axis array as a proxy for all the others and use that to determine what to do. Typing code this similar over and over again is usually a sign it’s time to abstract it into a class or something else; the more you repeat yourself, the stronger the indication that you need to do something about it. But for now we’ll leave it like this for ease of reading and comprehension.
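(As an aside, Python’s collections.deque handles this eviction automatically; here is a minimal sketch of what that refactor might look like, not what the code below actually uses:)

# hypothetical alternative: a deque with maxlen discards the oldest entry
# automatically once the limit is reached, removing the need for pop(0)
from collections import deque

plot_array_max_length = 100
x_axis_counters = deque(maxlen=plot_array_max_length)
roll_history = deque(maxlen=plot_array_max_length)

x_axis_counters.append(i)          # oldest value falls off automatically
roll_history.append(current_roll)  # pyqtgraph may want list(roll_history)

Back to the straightforward version: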

# if we reach our data limit set point, evict old data and add new.
# this helps keep the graph clean and prevents it from growing infinitely
if(len(x_axis_counters) > plot_array_max_length):
	x_axis_counters.pop(0)
	roll_history.pop(0)
	roll_setpoint_history.pop(0)
	pitch_history.pop(0)
	pitch_setpoint_history.pop(0)
	altitude_history.pop(0)
	altitude_setpoint_history.pop(0)

	x_axis_counters.append(i)
	roll_history.append(current_roll)
	roll_setpoint_history.append(desired_roll)
	pitch_history.append(current_pitch)
	pitch_setpoint_history.append(pitch_PID.SetPoint)
	altitude_history.append(current_altitude)
	altitude_setpoint_history.append(desired_altitude)
# else, just add new. we are not yet at limit.
else:
	x_axis_counters.append(i)
	roll_history.append(current_roll)
	roll_setpoint_history.append(desired_roll)
	pitch_history.append(current_pitch)
	pitch_setpoint_history.append(pitch_PID.SetPoint)
	altitude_history.append(current_altitude)
	altitude_setpoint_history.append(desired_altitude)
i = i + 1

You will notice that there are quite a few entries for altitude. We haven’t done anything with that yet so just set desired_altitude to an integer somewhere in the code so it doesn’t error out.

To complete the graphing portion, we need to actually plot the data. The clear=True in the lines below clears out the plot so we’re not replotting on top of the old data. We also need to process events to actually draw the graph:

# process events means draw the graphs
pg.QtGui.QApplication.processEvents()

# arguments are x values, y values, options
# pen is a different line in the plot
p1.plot(x_axis_counters, roll_history, pen=0, clear=True)
p1.plot(x_axis_counters, roll_setpoint_history, pen=1)

p2.plot(x_axis_counters, pitch_history, pen=0,clear=True)
p2.plot(x_axis_counters, pitch_setpoint_history, pen=1)

p3.plot(x_axis_counters, altitude_history, pen=0,clear=True)
p3.plot(x_axis_counters, altitude_setpoint_history, pen=1)

You can now run the code to see your graph populating with data!

PyQtGraph plotting the aircraft’s roll/pitch and desired roll/pitch

7 – Adding altitude autopilot (preselect and hold)

Ok so now that we have eye candy with the real-time graphs, we can make our autopilot do something useful: go to a selected altitude and hold it.

We already have the roll and pitch PIDs functioning as desired. How do we couple the pitch PID to get to the desired altitude? One cannot directly control altitude. Altitude is controlled via a combination of pitch and airspeed (and time).

We will call the coupled PIDs an inner loop (pitch) and an outer loop (altitude). The outer loop runs and its output will feed the input of the inner loop. The altitude PID will be fed a desired altitude and current altitude. The output will then mostly be the error (desired altitude – current altitude) multiplied by our P setting. Of course I and D will have a say in the output but by and large it will be some proportion of the error.

Let’s start with defining the altitude PID and desired altitude:

altitude_PID = PID.PID(P, I, D)
desired_altitude = 8000
altitude_PID.SetPoint = desired_altitude

With those defined, we now move to the main loop. The outer loop needs to be updated first. From there, we will normalize the output from the altitude PID and use that to set the pitch PID. The pitch PID will also be normalized to keep values in a reasonable range:

# update outer loops first
altitude_PID.update(current_altitude)

# if alt=12000 and setpoint=10000, the error is -2000. if P=0.1, the raw
# output is -200, which normalize() clamps to -15 (nose down)
pitch_PID.SetPoint = normalize(altitude_PID.output, min=-15, max=10)

# update PIDs
roll_PID.update(current_roll)
pitch_PID.update(current_pitch)

# update control outputs
new_ail_ctrl = normalize(roll_PID.output)
new_ele_ctrl = normalize(pitch_PID.output)

Now we just need to send those new control surface commands and we’ll be controlling the plane!

Outputting the control deflections should basically be the last part of the loop. We’ll put it right before the debug output:

# sending actual control values to XPlane
ctrl = [new_ele_ctrl, new_ail_ctrl, 0.0, -998] # ele, ail, rud, thr. -998 means don't change
client.sendCTRL(ctrl)

Full code of pitch_roll_autopilot_with_graphing.py

import sys
import xpc
import PID
from datetime import datetime, timedelta
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore, QtGui
import time

def normalize(value, min=-1, max=1):
	# if value = 700, and max = 20, return 20
	# if value = -200, and min = -20, return -20
	if (value > max):
		return max
	elif (value < min):
		return min
	else:
		return value

update_interval = 0.050 # seconds, 0.05 = 20 Hz
start = datetime.now()
last_update = start

# defining the initial PID values
P = 0.1 # PID library default = 0.2
I = P/10 # default = 0
D = 0 # default = 0

# initializing PID controllers
roll_PID = PID.PID(P, I, D)
pitch_PID = PID.PID(P, I, D)
altitude_PID = PID.PID(P, I, D)

# setting the desired values
# roll = 0 means wings level
# pitch = 2 means slightly nose up, which is required for level flight
desired_roll = 0
desired_pitch = 2
desired_altitude = 8000

# setting the PID set points with our desired values
roll_PID.SetPoint = desired_roll
pitch_PID.SetPoint = desired_pitch
altitude_PID.SetPoint = desired_altitude

x_axis_counters = [] #0, 1, 2, 3, etc. just basic x-axis values used for plotting
roll_history = []
pitch_history = []
altitude_history = []
roll_setpoint_history = []
pitch_setpoint_history = []
altitude_setpoint_history = []
plot_array_max_length = 300 # how many data points to hold in our arrays and graph
i = 1 # initialize x_axis_counter

# first the base app needs to be instantiated
app = pg.mkQApp("python xplane autopilot monitor")

# now the window itself is defined and sized
win = pg.GraphicsLayoutWidget(show=True)
win.resize(1000,600) #pixels
win.setWindowTitle("XPlane autopilot system control")

# we have 3 subplots
p1 = win.addPlot(title="roll",row=0,col=0)
p2 = win.addPlot(title="pitch",row=1,col=0)
p3 = win.addPlot(title="altitude", row=2, col=0)

# show the y grid lines to make it easier to interpret the graphs
p1.showGrid(y=True)
p2.showGrid(y=True)
p3.showGrid(y=True)

DREFs = ["sim/cockpit2/gauges/indicators/airspeed_kts_pilot",
		"sim/cockpit2/gauges/indicators/heading_electric_deg_mag_pilot",
		"sim/flightmodel/failures/onground_any",
		"sim/flightmodel/misc/h_ind"]

def monitor():
	global i
	global last_update
	with xpc.XPlaneConnect() as client:
		while True:
			if (datetime.now() > last_update + timedelta(milliseconds = update_interval * 1000)):
				last_update = datetime.now()
				print(f"loop start - {datetime.now()}")

				posi = client.getPOSI()
				ctrl = client.getCTRL()
				multi_DREFs = client.getDREFs(DREFs)

				current_roll = posi[4]
				current_pitch = posi[3]
				current_hdg = multi_DREFs[1][0]
				current_altitude = multi_DREFs[3][0]
				current_asi = multi_DREFs[0][0]
				onground = multi_DREFs[2][0]

				# update the display
				pg.QtGui.QApplication.processEvents()

				# update outer loops first
				altitude_PID.update(current_altitude)

				# if alt=12000 and setpoint=10000, the error is -2000. if P=0.1, the raw output is -200, which normalize() clamps to -15 (nose down)
				pitch_PID.SetPoint = normalize(altitude_PID.output, min=-15, max=10)

				# update PIDs
				roll_PID.update(current_roll)
				pitch_PID.update(current_pitch)

				# update control outputs
				new_ail_ctrl = normalize(roll_PID.output)
				new_ele_ctrl = normalize(pitch_PID.output)

				# if we reach our data limit set point, evict old data and add new.
				# this helps keep the graph clean and prevents it from growing infinitely
				if(len(x_axis_counters) > plot_array_max_length):
					x_axis_counters.pop(0)
					roll_history.pop(0)
					roll_setpoint_history.pop(0)
					pitch_history.pop(0)
					pitch_setpoint_history.pop(0)
					altitude_history.pop(0)
					altitude_setpoint_history.pop(0)

					x_axis_counters.append(i)
					roll_history.append(current_roll)
					roll_setpoint_history.append(desired_roll)
					pitch_history.append(current_pitch)
					pitch_setpoint_history.append(pitch_PID.SetPoint)
					altitude_history.append(current_altitude)
					altitude_setpoint_history.append(desired_altitude)
				# else, just add new. we are not yet at limit.
				else:
					x_axis_counters.append(i)
					roll_history.append(current_roll)
					roll_setpoint_history.append(desired_roll)
					pitch_history.append(current_pitch)
					pitch_setpoint_history.append(pitch_PID.SetPoint)
					altitude_history.append(current_altitude)
					altitude_setpoint_history.append(desired_altitude)
				i = i + 1

				p1.plot(x_axis_counters, roll_history, pen=0, clear=True)
				p1.plot(x_axis_counters, roll_setpoint_history, pen=1)

				p2.plot(x_axis_counters, pitch_history, pen=0,clear=True)
				p2.plot(x_axis_counters, pitch_setpoint_history, pen=1)

				p3.plot(x_axis_counters, altitude_history, pen=0,clear=True)
				p3.plot(x_axis_counters, altitude_setpoint_history, pen=1)

				# sending actual control values to XPlane
				ctrl = [new_ele_ctrl, new_ail_ctrl, 0.0, -998] # ele, ail, rud, thr. -998 means don't change
				client.sendCTRL(ctrl)

				output = f"current values --    roll: {current_roll: 0.3f},  pitch: {current_pitch: 0.3f}"
				output = output + "\n" + f"PID outputs    --    roll: {roll_PID.output: 0.3f},  pitch: {pitch_PID.output: 0.3f}"
				output = output + "\n"
				print(output)

if __name__ == "__main__":
	monitor()

Using the autopilot / Conclusion

To use the autopilot, fire up X-Plane, hop in a small-ish plane (gross weight less than 10k lb), take off, climb 1000′, then execute the code. Your plane should bring roll to 0 pretty quickly and start the climb/descent to the desired altitude.

X-Plane Python autopilot leveling off at 8000′ with pitch/roll/altitude graphed in real-time