Nanosecond accurate PTP server (grandmaster) and client tutorial for Raspberry Pi

Introduction

In the last two PPS posts (the original in 2021 and the revisit in 2025), we explored how to get microsecond-accurate time with a Raspberry Pi and a GPS module that outputs a once-per-second pulse (PPS). That project was a ton of fun—and borderline overkill for most home setups—but it got us into the realm of microseconds! Now we’re going to shoot for yet another SI prefix leap and aim for nanosecond accuracy. That’s 1 ns = 0.000000001 seconds (alternatively, it means there are 1 billion nanoseconds in one second).

How? By using the Precision Time Protocol (PTP, IEEE 1588). PTP is designed for high-precision time synchronization over a network, commonly used in financial trading, industrial control, and telecom environments. With the right hardware and configuration, you can synchronize clocks across your devices to within hundreds of nanoseconds with common homelab gear. Is the title a little misleading? Maybe, but technically it still makes sense to use the nano prefix for the numbers that we’re talking about here (anything >1000 nanoseconds should probably be referred to in microseconds).

To be clear, the nanoseconds here refer to the synchronization between devices on your network! Depending on how your Pi is set up, and the quality of its oscillator, it is unlikely that your Pi's absolute timing, as determined by the PPS signals, will be as accurate or precise as the PTP synchronization.

As always, do you need nanosecond-level timing at home? Absolutely, 100% no. But this is Austin’s Nerdy Things, so here we are (again)!

Why would you need time this accurate at home?

You don’t, at all. Even microsecond-level accuracy is already overkill for home usage. But there are some niche use cases:

  • Amateur radio or signal processing that needs super-tight phase alignment.
  • High-speed data acquisition where you want to correlate measurements with precise timestamps.
  • Simply pushing the limits of what’s possible because (if you read far enough back in my about me) the last four digits of my phone number spell NERD (seriously. and I’ve had my phone number since I was 15.)

PTP can outperform NTP by a few orders of magnitude if everything is set up correctly with hardware timestamping. With PTP, your network cards (and potentially switches) handle timestamps in hardware, avoiding much of the jitter introduced by the kernel and software layers.

Diagram showing the various places timestamping can occur in the processing of an Ethernet packet; the closer to the link, the better for timing purposes. Source: https://networklessons.com/ip-services/introduction-to-precision-time-protocol-ptp

Disclaimer

My experiments appear to be relatively successful, but I need to get this out of the way: this level of timing is solidly into the realm of experts. I kinda sorta understand most of what's going on here, but there are a ton of super detailed nuances that go way over my head. Pretty sure some people spend a lifetime on this kind of stuff (particularly at places like the US National Institute of Standards and Technology – NIST, which is "up the street" from where I live and is one of the NTP sources I use). Nanoseconds are being reported, but I have no way to independently verify them.

Materials needed

  • Two machines/computers with NICs (network interface cards) that support hardware timestamping – many server NICs have this, and quite a few "prosumer" Intel NICs do too (examples: i210, i340, i350, some i225/i226). Essential for the revisited PPS NTP post, Raspberry Pi 5s have it as well. PTP is standardized as IEEE 1588, so you may see either name on datasheets.
  • A very local area network. From what I’ve read, this won’t work well over a WAN, especially if there is asymmetric latency (a typical homelab network, even across a couple switches, will be fine)
  • A machine with highly accurate time (perhaps from PPS GPS sync) to be used as the “grandmaster”, which is PTP-speak for server.

Procedure

The general procedure is to set up the server first, which involves syncing the PHC (physical hardware clock) of the NIC to the system clock, which is disciplined from elsewhere (in my case, PPS/GPS). After the PHC is synchronized to the system clock, we will use linuxptp (ptp4l) to serve that time as a grandmaster. After that, we will essentially do the opposite on any client machines – synchronize the PHC from the PTP grandmaster, and then sync the system clock with the PHC.

0 – Ensure your NIC supports hardware timestamps

Run ethtool to check whether your NIC supports hardware timestamps. The format is ethtool -T [nic name]. My NIC is named enp0s31f6 so I will use that. This is an I219-LM in a Dell Optiplex 7040, which is not exactly new but works very well as a Proxmox Backup Server.

root@pbs:~# ethtool -T enp0s31f6
Time stamping parameters for enp0s31f6:
Capabilities:
        hardware-transmit
        software-transmit
        hardware-receive
        software-receive
        software-system-clock
        hardware-raw-clock
PTP Hardware Clock: 0
Hardware Transmit Timestamp Modes:
        off
        on
Hardware Receive Filter Modes:
        none
        all
        ptpv1-l4-sync
        ptpv1-l4-delay-req
        ptpv2-l4-sync
        ptpv2-l4-delay-req
        ptpv2-l2-sync
        ptpv2-l2-delay-req
        ptpv2-event
        ptpv2-sync
        ptpv2-delay-req

root@pbs:~# ip l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 48:4d:7e:db:98:6b brd ff:ff:ff:ff:ff:ff

root@pbs:~# lspci | grep Ether
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31)

The lines to look for are in the Capabilities section:

  • hardware-transmit
  • hardware-receive

We have those so we’re good to go on the client side. I haven’t explored those hardware receive filter modes yet but they look interesting.

The server is the Raspberry Pi 5 which shows similar output:

austin@raspberrypi5:~ $ ethtool -T eth0
Time stamping parameters for eth0:
Capabilities:
        hardware-transmit
        software-transmit
        hardware-receive
        software-receive
        software-system-clock
        hardware-raw-clock
PTP Hardware Clock: 0
Hardware Transmit Timestamp Modes:
        off
        on
        onestep-sync
Hardware Receive Filter Modes:
        none
        all

1 – Synchronize the hardware clock

First, install linuxptp on both server and client

sudo apt install linuxptp

With linuxptp installed, we will use phc2sys to synchronize the various clocks. Despite the name, phc2sys can synchronize in either direction (from PHC to system clock, or from system clock to PHC).

With that out of the way, let’s get to the command:

# s = source clock
# c = destination clock, replace with your NIC name
# O = offset. PTP traditionally uses TAI, which has no leap seconds and, as of Feb 2025, is 37 seconds ahead of UTC. 0 means no offset (use whatever the system clock is using)
# step_threshold means any delta above this amount is jumped (stepped) instead of slowly slewed via fast/slow frequency adjustments
# m = print status messages to stdout
sudo phc2sys -s CLOCK_REALTIME -c eth0 -O 0 --step_threshold=0.5 -m

And the results:

screenshot of phc2sys synchronizing the PHC of a Raspberry Pi 5 NIC with the system clock

Here we see three fields with numbers (offset/delay in nanoseconds and freq in parts per billion (ppb)):

  • Offset is how far off the PHC is from the realtime clock (starting at 3.4 million nanoseconds = 3.4 milliseconds and then stepping down to 28 nanoseconds)
  • Frequency is the frequency adjustment of the destination clock (in this case, the eth0 NIC PHC)
  • Delay is the estimated amount of time to get the message from the source to destination (which is suspiciously high for this NIC; other machines typically show much lower numbers)
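
If you want a quick sanity check that the PHC is actually tracking the system clock, linuxptp also ships with phc_ctl, which can read a PHC and compare it against CLOCK_REALTIME. A minimal spot check (run it in another terminal while phc2sys is going; consult your version's man page, as exact command behavior can vary):

# print the offset between the NIC PHC and CLOCK_REALTIME, in nanoseconds
sudo phc_ctl eth0 cmp

# or just read the PHC's current time
sudo phc_ctl eth0 get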

Leave this running (we’ll daemon-ize things at the end).

2 – Tune the Raspberry Pi 5 NIC driver to reduce latency

The Raspberry Pi Ethernet driver coalesces packets (it collects them for a short period before raising an interrupt), and that period is 49 microseconds by default.

Raspberry Pi showing 49 microseconds of packet coalescing

We can reduce that to the driver minimum of 4 microseconds:

sudo ethtool -C eth0 tx-usecs 4
sudo ethtool -C eth0 rx-usecs 4

This will not persist across reboots, so we'll add a systemd service to apply it at boot.

In /etc/systemd/system/ptp_nic_coalesce.service:

[Unit]
Description=NIC coalesce minimize
Requires=network.target
After=network.target

[Service]
ExecStart=/usr/sbin/ethtool -C eth0 tx-usecs 4 rx-usecs 4
Type=oneshot

[Install]
WantedBy=multi-user.target

Then enable/start:

sudo systemctl enable ptp_nic_coalesce.service
sudo systemctl start ptp_nic_coalesce.service
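
To confirm the new values stuck (and to see what your NIC's defaults were in the first place), you can read the coalescing parameters back with a lowercase -c:

ethtool -c eth0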

3 – Serve the PHC time over the network

Next up is to use ptp4l to serve the time via PTP over your network.

We need a configuration file to give to ptp4l. This isn't strictly necessary; most config items can be passed as command-line arguments, but I like config files.

Call this file whatever you like (perhaps ptp-gm.conf, for precision time protocol grandmaster). Note that the systemd unit in step 7 expects it at /etc/ptp4l/ptp-gm.conf:

[global]
# extra logging
verbose               1

# use hardware timestamping (alternative is software, which isn't nearly as accurate/precise)
time_stamping         hardware

# you can specify a "domain number", which is analogous to a VLAN
#domainNumber          0

# force this node to act as a master (won't revert to slave).
masterOnly            1

# priority settings, 128 is default. lower numbers are higher priority in case there are multiple grandmasters
priority1             128

# clockClass=6 for GNSS reference
# other classes = https://documentation.nokia.com/srlinux/24-10/books/network-synchronization/ieee-1588-ptp.html
clockClass            6

# timeSource is where time comes from - 0x10 is "atomic clock" which is a bit sus for us but not ultimately wrong
# https://support.spirent.com/csc30/s/article/FAQ14011
timeSource            0x10

# log output to a file, summary interval is 2^x, so 1 = 2^1 = every 2 seconds
# can also output with -m
# summary_interval     1
# logfile             /var/log/ptp4l.log

Now run ptp4l as well:

sudo ptp4l -f ptp-gm.conf -i eth0

You'll see some output as things get set up and running. Key messages to look for are "selected local clock … as best master" and "assuming the grand master role". The MAC address shown is the NIC's.

Raspberry Pi 5 acting as PTP grandmaster, using the physical hardware clock of the NIC as the “local clock”, which is synchronized with the realtime clock via phc2sys which is synchronized via PPS/GPS.
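
If you want to poke at the running grandmaster, linuxptp's pmc (PTP management client) can query it over the local Unix domain socket. A couple of read-only examples (field names can differ slightly between linuxptp versions):

# show this clock's identity, class, and priorities as served
sudo pmc -u -b 0 'GET DEFAULT_DATA_SET'

# show which clock is currently considered the (grand)master
sudo pmc -u -b 0 'GET PARENT_DATA_SET'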

Now we are ready to serve this time to clients.

4 – Receive PTP over the network

To receive PTP over the network you could get by with software timestamping, but we're going for higher accuracy/precision than that, so select a machine with a NIC that supports PTP/IEEE 1588 (see step 0 for reference).

Setting system time via PTP is really a two-step process – synchronizing the NIC PHC via PTP, and then using phc2sys to synchronize the system clock with the PHC. If you are thinking this sounds like the earlier phc2sys step in reverse, you are correct; that is exactly what the clients do.

Diagram showing the source -> PHC -> system clock -> PTP -> network -> PTP -> PHC -> system clock flow. Source: https://latency-matters.medium.com/be-back-on-time-b3267f62d76a

We will use ptp4l again to set the PHC via PTP:

root@pbs:~# ptp4l -i enp0s31f6 -m --summary_interval 1

And you will start seeing some init messages followed by some statistics scrolling past:

ptp4l as slave showing double-digit nanosecond synchronization

The output looks a bit different if there are more requests/polls than summary outputs – RMS will be added, which is root mean squared error, along with max error, and some +/- indicators on the frequency and delay. That delay is still suspicious…

We see here that we have achieved double-digit nanosecond synchronization across the network!
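
You can also ask the client's ptp4l for its view of the offset directly. On linuxptp builds that support the TIME_STATUS_NP management TLV, this reports master_offset in nanoseconds and whether a grandmaster is present:

sudo pmc -u -b 0 'GET TIME_STATUS_NP'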

Now compare to a Supermicro Xeon v4 server with Intel i350 NICs synchronizing to an OSA 5401 SyncPlug – super stable and tight precision.

ptp4l as slave showing single-digit nanosecond synchronization

The OSA 5401 has an oscillator rated to 1 ppb, and is exceptionally stable. That is half of the equation – the i350 is certainly better than the i219, but probably not by orders of magnitude like the OSA 5401 is.

Oscilloquartz OSA 5401 SyncPlug in a Brocade switch with a GPS antenna connected to its SMA port, showing the once-per-second green LED lit. Bet you've never seen a GPS antenna port on an SFP module before. This is a complete computer in an SFP module.

Actually, I can try synchronizing the i219-LM to the SyncPlug. Long story short: I use Ethernet layer 2 on this system (not layer 3) because the Proxmox hosts share their NICs and that is just how I set it up originally. I also use domain number 24 because that's what the OSA 5401 came with from eBay.
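
For reference, a minimal client config for that scenario might look like the sketch below. network_transport and domainNumber are standard ptp4l options; the file name is just an example:

[global]
time_stamping         hardware
network_transport     L2
domainNumber          24

Run it against the right NIC with something like sudo ptp4l -f ptp-client-l2.conf -i enp0s31f6 -m.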

We can see it is a little bit better, but still not nearly as good as the i350. I am now tempted to try my Solarflare/Mellanox NICs, especially since the one I just looked at in my cupboard has u.fl connectors for both PPS in and out… topic for a future post.

With the PHC of the client system synchronized, we are 3/4 of the way to completion. Last up – setting the system clock from the PHC.

5 – Setting client clock from PHC

I originally just used the PHC in Chrony as a source, and that works well. Through my research for this post, I saw that it is also possible to share a memory segment from ptp4l to Chrony. I like just using the PHC, so we'll use that approach here.

Add this line to your chrony config:

refclock PHC /dev/ptp0 poll 0 dpoll -5 tai

poll 0 means poll the source every second (2^0), dpoll -5 means query the underlying PHC more often (every 2^-5 seconds, i.e. 32 times per second), and tai tells Chrony the source is on the TAI timescale (currently offset 37 seconds from UTC).

Restart chrony

sudo systemctl restart chrony

After a minute or so, check Chrony's sources with chronyc sources:

root@prox-1u-01:~# chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
#* PHC0                          0   0   377     0     -1ns[   -1ns] +/-   13ns
=? pi-ntp.home.fluffnet.net      0   4     0     -     +0ns[   +0ns] +/-    0ns
^- 10.98.1.18                    2   7   377   82h   -117ms[  +69us] +/-  660us
=- 10.98.1.174                   1   0   377     1    -60us[  -60us] +/-   31us

We can see that Chrony has successfully selected the PHC and has synchronized the system clock to within a single nanosecond of it (with an estimated error bound of ±13 ns)!
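
To see how the system clock itself is doing against the selected source (estimated error, frequency drift, and so on), chronyc tracking is also worth a look:

chronyc tracking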

6 – Serving PTP to other clients

You can of course repeat the process for N other clients. Alternatively, you can have Chrony serve NTP with hardware timestamps and the experimental F323 extension field enabled, which turns on some NTPv5 features that help a ton with synchronization. There is also an F324 field, which I haven't tried, that appears to run PTP over NTP packets.

The relevant lines from my Chrony config:

peer 10.98.1.172 minpoll 0 maxpoll 0 iburst xleave extfield F323
peer 10.98.1.174 minpoll 0 maxpoll 0 iburst xleave extfield F323

# use tai if synchronized to a true TAI source, which ignores leap seconds
refclock PHC /dev/ptp0 poll 0 dpoll -5 tai prefer trust

allow all

hwtimestamp *
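
For a downstream machine pointing at this server, the matching client-side config would be something along these lines (a sketch; substitute your server's address, and note that hwtimestamp only helps if the client NIC supports it):

server 10.98.1.172 minpoll 0 maxpoll 0 iburst xleave extfield F323
hwtimestamp *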

7 – Setting up the daemons and enabling

File – /etc/systemd/system/phc2sys-grandmaster.service

[Unit]
Description=sync PHC with system clock (grandmaster)
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/sbin/phc2sys -s CLOCK_REALTIME -c eth0 -O 0 --step_threshold=0.5
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target

File – /etc/systemd/system/ptp4l-grandmaster.service

[Unit]
Description=Precision Time Protocol (PTP) service (Grandmaster)
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/sbin/ptp4l -f /etc/ptp4l/ptp-gm.conf -i eth0
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target

Grandmaster services reload & start:

sudo systemctl daemon-reload
sudo systemctl enable phc2sys-grandmaster.service
sudo systemctl enable ptp4l-grandmaster.service
sudo systemctl start phc2sys-grandmaster.service
sudo systemctl start ptp4l-grandmaster.service

File – /etc/systemd/system/ptp4l-client.service

[Unit]
Description=Precision Time Protocol (PTP) service (Client)
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/sbin/ptp4l -i [NIC here] -m --summary_interval 3
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target

And if you don’t want to mess with Chrony and just want to synchronize your system clock directly from the PHC on your clients – /etc/systemd/system/phc2sys-client.service

[Unit]
Description=Synchronize system clock with PHC (Client)
After=ptp4l-client.service
Requires=ptp4l-client.service

[Service]
Type=simple
ExecStart=/usr/sbin/phc2sys -s [NIC here] -c CLOCK_REALTIME -O 0 --step_threshold=0.5 -m
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target

Client services reload & start:

sudo systemctl daemon-reload
sudo systemctl enable ptp4l-client.service
sudo systemctl enable phc2sys-client.service  # If not using Chrony
sudo systemctl start ptp4l-client.service
sudo systemctl start phc2sys-client.service   # If not using Chrony

Conclusion

We’ve come a long way in our pursuit of precise timing – from using GPS PPS signals for microsecond accuracy to achieving nanosecond-level synchronization with PTP. While this level of precision is absolutely overkill for most home setups (as was the microsecond timing from our previous adventures), it demonstrates what’s possible with relatively accessible hardware like the Raspberry Pi 5 and common Intel NICs.

The key takeaways from this exploration:

  • PTP with hardware timestamping can achieve double-digit nanosecond synchronization even with consumer-grade hardware
  • The quality of your network interface cards matters significantly – as we saw comparing the i219-LM, i350, and the OSA 5401
  • Simple optimizations like adjusting packet coalescing can have meaningful impacts on timing precision
  • Modern tools like Chrony make it relatively straightforward to integrate PTP into your existing time synchronization setup

For those interested in pushing timing precision even further, there are still frontiers to explore – from specialized timing NICs to advanced PTP profiles. But for now, I think I’ll stop here and enjoy my massively overengineered home time synchronization setup. At least until the next timing-related rabbit hole comes along…

References:

https://github.com/by/ptp4RaspberryPi/blob/main/Setup.md

https://chrony-project.org/documentation.html

https://linuxptp.nwtime.org/documentation/ptp4l

https://chrony-project.org/examples.html


UUIDv7 site launched!

For a little side project, I built uuid7.com almost entirely with the help of AI tools. Check it out!

I also built the stack with Docker Compose. I have resisted Docker for so long because it was such a pain for homelab-type stuff. But I recently started needing to use it at work (we are migrating to AWS Kubernetes – yay! not), so I figured I'd give it a go.

With the assistance of ChatGPT, I put together a full Docker stack using Cloudflare tunnels (cloudflared), Nginx as the webserver, and Flask as the backend all in a couple hours. It works great!
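
For the curious, the shape of the stack is roughly the following (a hand-waved sketch, not my exact file; the service names, the Flask build path, and the tunnel token are all placeholders):

services:
  web:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro   # placeholder path to the Nginx config
    depends_on:
      - app
  app:
    build: ./flask-app   # hypothetical directory holding the Flask backend and its Dockerfile
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run --token ${TUNNEL_TOKEN}   # token supplied via environment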

That said, it is running on my main desktop at home to see if it’s a popular site so fingers crossed it holds up.


Development has begun on MonchMatch!

I have finally started actual work on a “Tinder for restaurants” that I’m calling MonchMatch. I’ve been tossing the idea around for a few months but have finally made some moves with outsourcing the design. I can get the app and backend going myself, but I have zero creative talent.

If you are or know anyone who wants to work on the design, please drop me a note.

If you’re interested in being notified when MonchMatch is released, head on over to the MonchMatch site and enter your email!


Proxmox corosync issues

Sometimes I make a change to my Proxmox cluster configuration without all nodes in a healthy state (i.e. some are powered off). This isn't a great habit to get into and sometimes results in troubleshooting.

Putting a quick post up so I can easily reference how to resolve corosync issues.

# stop corosync and pmxcfs on all nodes
$ systemctl stop corosync pve-cluster

# start pmxcfs in local mode on all nodes
$ pmxcfs -l

# put correct corosync config into local pmxcfs and corosync config dir (make sure to bump the 'config_version' inside the config file)
$ cp correct_corosync.conf /etc/pve/corosync.conf
$ cp correct_corosync.conf /etc/corosync/corosync.conf

# kill local pmxcfs
$ killall pmxcfs

# start corosync and pmxcfs again
$ systemctl start pve-cluster corosync

# check status
$ journalctl --since '-5min' -u pve-cluster -u corosync
$ pvecm status

Source: https://forum.proxmox.com/threads/made-mistake-in-corosync-conf-now-cannot-edit.77173/

Some errors I got to help with search engines:

ipcc_send_rec[1] failed: Connection refused
ipcc_send_rec[2] failed: Connection refused
ipcc_send_rec[3] failed: Connection refused
Unable to load access control list: Connection refused

Sending MPP inverter data to MQTT and InfluxDB with Python

Catching up

Hey there, welcome back to Austin’s Nerdy Things. It’s been a while since my last post – life happens. I haven’t lost sight of the blog. Just needed to do some other things and finish up some projects before documenting stuff.

Background

If you recall, my DIY hybrid solar setup with battery backup is up and running. I wrote about it here – DIY solar with battery backup – up and running!

The MPP LV2424 inverter I’m using puts out a lot of data. Some of it is quite useful, some of it not so much. Regardless, I am capturing all of it in my InfluxDB database with Python. This allows me to chart it in Grafana, which I use for basic monitoring stuff around the house. This post will document getting data from the MPP inverter to Grafana.

Jumping ahead

Final product first. This is a screenshot showing my complete (work-in-progress) Grafana dashboard for my DIY hybrid solar setup.

screenshot showing Grafana dashboard which contains various charts and graphs of MPP solar inverter voltages and data
Solar dashboard as seen in Grafana pulling data from InfluxDB

The screenshot shows the last 6 hours worth of data from my system. It was a cloudy and then stormy afternoon/evening here in Denver (we got 0.89 inches of rain since midnight as of typing this post!), so the solar production wasn’t great. The panels are as follows:

  • Temperatures – showing temperatures from 3 sources: the MPP LV2424 inverter heat sink, and both BMS temperature probes (one is on the BMS board itself, the other is a probe on the battery bank). The inverter has at least 2 temperature readings, maybe 3. They all basically show the same thing.
  • Solar input – shows solar voltage as seen by the solar charge controller as well as solar power (in watts) going through the charge controller.
  • Cell voltages – the voltage reading of each of my 8 battery bank cells as reported by the BMS. The green graph also shows the delta between max and min cells. They are still pretty balanced in the flat part of the discharge curve.
  • Total power – a mashup of what the inverter is putting out, what the solar is putting in, the difference between the two, and what the BMS is reporting. I’m still trying to figure out all the nuances here. There is definitely a discrepancy between what the inverter is putting out and what the solar is putting in when the batteries are fully charged. I believe the difference is the power required to keep the transformer energized (typically ranges from 30-60W).
  • DC voltages – as reported by the inverter and the BMS. The inverter is accurate to 0.1V, the BMS goes to 0.01V.
  • AC voltages – shows the input voltage from the grid and the output voltage from the inverter. These will match unless the inverter is disconnected from the grid.
  • Data table – miscellaneous information from the inverter that isn’t graphable
  • Output load – how much output I’m using compared to the inverter’s limit
  • Total Generation – how much total energy has been captured by the inverter/solar panels. This is limited because I’m not back feeding the grid.

Getting data out of the MPP Solar LV2424 inverter with Python to MQTT

I am using two cables to plug the inverter into a computer. The first is the serial cable that came with the inverter. The second is a simple USB to RS232 serial adapter plugged into a Dell Micro 3070.

The computer is running Proxmox, which is a virtual machine hypervisor. That doesn’t matter for this post, we can ignore it. Just pretend the USB cable is plugged directly into a computer running Ubuntu 20.04 Linux.

The main bit of software I’m using is published on GitHub by jblance under the name ‘mpp-solar’ – https://github.com/jblance/mpp-solar. This is a utility written in Python to communicate with MPP inverters.

There was a good bit of fun trying to figure out exactly what command I needed to run to get data out of the inverter as evidenced by the history command:

Trying to figure out the usage of the Python mpp-solar utility

In the end, what worked for me to get the data is the following. I believe the protocol (-P) will be different for different MPP Solar inverters:

sudo mpp-solar -p /dev/ttyUSB0 -b 2400 -P PI18 --getstatus

And the results are below. The grid voltage reported is a bit low because the battery started charging from the grid a few minutes before this was run.

austin@mpp-linux:~/mpp-solar-python$ sudo mpp-solar -p /dev/ttyUSB0 -b 2400 -P PI18 --getstatus
Command: ET - Total Generated Energy query
------------------------------------------------------------
Parameter                       Value           Unit
working_mode                    Hybrid mode(Line mode, Grid mode)
grid_voltage                    111.2           V
grid_frequency                  59.9            Hz
ac_output_voltage               111.2           V
ac_output_frequency             59.9            Hz
ac_output_apparent_power        155             VA
ac_output_active_power          139             W
output_load_percent             5               %
battery_voltage                 27.1            V
battery_voltage_from_scc        0.0             V
battery_voltage_from_scc2       0.0             V
battery_discharge_current       0               A
battery_charging_current        19              A
battery_capacity                76              %
inverter_heat_sink_temperature  30              °C
mppt1_charger_temperature       0               °C
mppt2_charger_temperature       0               °C
pv1_input_power                 0               W
pv2_input_power                 0               W
pv1_input_voltage               0.0             V
pv2_input_voltage               0.0             V
setting_value_configuration_state       Something changed
mppt1_charger_status            abnormal
mppt2_charger_status            abnormal
load_connection                 connect
battery_power_direction         charge
dc/ac_power_direction           AC-DC
line_power_direction            input
local_parallel_id               0
total_generated_energy          91190           Wh

And to get the same data right into MQTT I am using the following:

sudo mpp-solar -p /dev/ttyUSB0 -P PI18 --getstatus -o mqtt -q mqtt --mqtttopic mpp

The above command is being run as a cron job once a minute. The default baud rate for the inverter is 2400 bps (yes, bits per second), which is super slow so a full poll takes ~6 seconds. Kind of annoying in 2021 but not a huge problem. The cron entry for the command is this:

# this program feeds a systemd service to convert the outputted mqtt to influx points
* * * * * /usr/local/bin/mpp-solar -p /dev/ttyUSB0 -P PI18 --getstatus -o mqtt -q mqtt --mqtttopic mpp

So with that we have MPP inverter data going to MQTT.

Putting MPP data into InfluxDB from MQTT

Here we need another script written in… take a guess… Python! This script basically just opens a connection to an MQTT broker and forwards any updates to InfluxDB. The full script is a bit more complicated, and I actually stripped a lot out because my MQTT topic names didn't fit the template the original author used. I have started using this framework in other places to do the MQTT-to-InfluxDB translation. I like things going through an intermediate MQTT broker so they can be picked up for easy viewing in Home Assistant. Original code from https://github.com/KHoos/mqtt-to-influxdb-forwarder. The original seems like it was built to be more robust than what I'm using it for (read: I have no idea what half of it does), but it worked for my purposes.

You'll need a simple text file containing your InfluxDB password, referenced in the arguments. If your password is 'password', the only contents of the file should be 'password'. I added the isFloat() function to make sure that strings weren't getting added to the numeric tables in InfluxDB. I honestly find the structure/layout of storing stuff in Influx quite confusing, so I'm sure there's a better way to do this.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
####################################
# originally found at/modified from https://github.com/KHoos/mqtt-to-influxdb-forwarder
####################################

# forwarder.py - forwards IoT sensor data from MQTT to InfluxDB
#
# Copyright (C) 2016 Michael Haas <[email protected]>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301  USA

import argparse
import paho.mqtt.client as mqtt
from influxdb import InfluxDBClient
import json
import re
import logging
import sys
import requests.exceptions


class MessageStore(object):

        def store_msg(self, node_name, measurement_name, value):
                raise NotImplementedError()

class InfluxStore(MessageStore):

        logger = logging.getLogger("forwarder.InfluxStore")

        def __init__(self, host, port, username, password_file, database):
                password = open(password_file).read().strip()
                self.influx_client = InfluxDBClient(
                        host=host, port=port, username=username, password=password, database=database)

        def store_msg(self, database, sensor, value):
                influx_msg = {
                        'measurement': database,
                        'tags': {'sensor': sensor},
                        'fields': {'value' : value}
                }
                self.logger.debug("Writing InfluxDB point: %s", influx_msg)
                try:
                        self.influx_client.write_points([influx_msg])
                except requests.exceptions.ConnectionError as e:
                        self.logger.exception(e)

class MessageSource(object):

        def register_store(self, store):
                if not hasattr(self, '_stores'):
                        self._stores = []
                self._stores.append(store)

        @property
        def stores(self):
                # return copy
                return list(self._stores)

def isFloat(str_val):
  try:
    float(str_val)
    return True
  except ValueError:
    return False

def convertToFloat(str_val):
    if isFloat(str_val):
        fl_result = float(str_val)
        return fl_result
    else:
        return str_val

class MQTTSource(MessageSource):

        logger = logging.getLogger("forwarder.MQTTSource")

        def __init__(self, host, port, node_names, stringify_values_for_measurements):
                self.host = host
                self.port = port
                self.node_names = node_names
                self.stringify = stringify_values_for_measurements
                self._setup_handlers()

        def _setup_handlers(self):
                self.client = mqtt.Client()

                def on_connect(client, userdata, flags, rc):
                        self.logger.info("Connected with result code  %s", rc)
                        # subscribe to /node_name/wildcard
                        #for node_name in self.node_names:
                        # topic = "{node_name}/#".format(node_name=node_name)
                        topic = "get_status/status/#"
                        self.logger.info("Subscribing to topic %s", topic)
                        client.subscribe(topic)

                def on_message(client, userdata, msg):
                        self.logger.debug("Received MQTT message for topic %s with payload %s", msg.topic, msg.payload)
                        # topic layout is get_status/status/<measurement>[/unit]
                        list_of_topics = msg.topic.split('/')
                        measurement = list_of_topics[2]
                        if list_of_topics[-1] == 'unit':
                                # unit topics carry no numeric data, so skip them
                                value = None
                        else:
                                value = msg.payload
                                decoded_value = value.decode('UTF-8')
                                if isFloat(decoded_value):
                                        # numeric payloads go to the numeric measurement
                                        float_value = convertToFloat(decoded_value)
                                        for store in self.stores:
                                                store.store_msg("power_measurement", measurement, float_value)
                                else:
                                        # status strings and the like go to a separate string measurement
                                        for store in self.stores:
                                                store.store_msg("power_measurement_strings", measurement, decoded_value)

                self.client.on_connect = on_connect
                self.client.on_message = on_message

        def start(self):
                print(f"starting mqtt on host: {self.host} and port: {self.port}")
                self.client.connect(self.host, self.port)
                # Blocking call that processes network traffic, dispatches callbacks and
                # handles reconnecting.
                # Other loop*() functions are available that give a threaded interface and a
                # manual interface.
                self.client.loop_forever()


def main():
        parser = argparse.ArgumentParser(
                description='MQTT to InfluxDB bridge for IOT data.')
        parser.add_argument('--mqtt-host', default="mqtt", help='MQTT host')
        parser.add_argument('--mqtt-port', default=1883, help='MQTT port')
        parser.add_argument('--influx-host', default="dashboard", help='InfluxDB host')
        parser.add_argument('--influx-port', default=8086, help='InfluxDB port')
        parser.add_argument('--influx-user', default="power", help='InfluxDB username')
        parser.add_argument('--influx-pass', default="<I have a password here, unclear if the pass-file takes precedence>", help='InfluxDB password')
        parser.add_argument('--influx-pass-file', default="/home/austin/mpp-solar-python/pass.file", help='InfluxDB password file')
        parser.add_argument('--influx-db', default="power", help='InfluxDB database')
        parser.add_argument('--node-name', default='get_status', help='Sensor node name', action="append")
        parser.add_argument('--stringify-values-for-measurements', required=False,      help='Force str() on measurements of the given name', action="append")
        parser.add_argument('--verbose', help='Enable verbose output to stdout', default=False, action='store_true')
        args = parser.parse_args()

        if args.verbose:
                logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
        else:
                logging.basicConfig(stream=sys.stdout, level=logging.WARNING)

        print("creating influxstore")
        store = InfluxStore(host=args.influx_host, port=args.influx_port, username=args.influx_user, password_file=args.influx_pass_file, database=args.influx_db)
        print("creating mqttsource")
        source = MQTTSource(host=args.mqtt_host,
                                                port=args.mqtt_port, node_names=args.node_name,
                                                stringify_values_for_measurements=args.stringify_values_for_measurements)
        print("registering store")
        source.register_store(store)
        print("start")
        source.start()

if __name__ == '__main__':
        main()

Running the MQTT to InfluxDB script as a system daemon

Next up, we need to run the MQTT to InfluxDB Python script as a daemon so it starts with the machine and runs in the background. If you haven’t noticed by now, this is the standard pattern for most of the stuff I do – either a cron job or daemon to get data and another daemon to put it where I want it. Sometimes they’re the same.

austin@mpp-linux:~$ cat /etc/systemd/system/mpp-solar.service
[Unit]
Description=MPP inverter data - MQTT to influx
After=multi-user.target

[Service]
User=austin
Type=simple
Restart=always
RestartSec=10
# data feeds this script from a root cronjob running every 60s
ExecStart=/usr/bin/python3 /home/austin/mpp-solar-python/main.py

[Install]
WantedBy=multi-user.target

Then activate it:

austin@mpp-linux:~$ sudo systemctl daemon-reload
austin@mpp-linux:~$ sudo systemctl enable mpp-solar.service
austin@mpp-linux:~$ sudo systemctl start mpp-solar.service
austin@mpp-linux:~$ sudo systemctl status mpp-solar.service
● mpp-solar.service - MPP inverter data - MQTT to influx
     Loaded: loaded (/etc/systemd/system/mpp-solar.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2021-07-31 02:21:32 UTC; 1 day 13h ago
   Main PID: 462825 (python3)
      Tasks: 1 (limit: 1072)
     Memory: 19.2M
     CGroup: /system.slice/mpp-solar.service
             └─462825 /usr/bin/python3 /home/austin/mpp-solar-python/main.py

Jul 31 02:21:32 mpp-linux systemd[1]: Started MPP inverter data - MQTT to influx.

All done

With that, you should now have data flowing into your InfluxDB instance from your MPP inverter via this Python script. This is exactly what I’m using for my LV2424 but it should work with others like the PIP LV2424 MSD, PIP-4048MS, IPS stuff, LV5048, and probably a lot of others.

Next Steps

Next post will cover designing the Grafana dashboard to show this data.