Going Lights Out: Building a Real-Time F1 Lighting System in my Smart Home (Part 2)
This is Part 2 of a three-part series on how I built a real-time lighting system for my smart home that reacts to live Formula 1 races. In the previous part I talked about the inspiration for the project. In this part, I’ll dive into the Python code, the struggles of parsing long strings of text data, and my final achievement of conquering time itself. In the next and final part, we’ll dive deeper into the Home Assistant configuration.

In part 1 I explained what inspired me to look for a live F1 expansion, my struggles finding a fitting solution, and how even making something myself seemed like a bridge too far. But just as I was about to give up, I found the magic command in the FastF1 API:
python -m fastf1.livetiming save --append cache.txt
Not that it was obvious to begin with. When I first tried out the FastF1 library, it was pretty clear that the main intended use was replaying previous broadcasts, and even the documentation for the livetiming function stated that you were not able to process the data as it was being recorded. But having nothing to lose anyway, I sneakily got it running during a free practice session while I was at work. To my great delight, line after line started to appear in the cache file: ‘RaceControlMessages’, ‘WeatherData’, ‘TimingData’. None of it truly made sense yet, but clearly I could use this somehow. Then it happened: a yellow flag was waving on track. Eagerly I dove into the cache file searching for it, and that is when I found it:
['RaceControlMessages', {'Messages': {'56': {'Utc': '2025-07-05T11:39:56', 'Category': 'Flag', 'Flag': 'YELLOW', 'Scope': 'Sector', 'Sector': 2, 'Message': 'YELLOW IN TRACK SECTOR 2'}}}, '2025-07-05T11:39:56.262Z']
It was a messy, cryptic line buried in a jungle of text, but it was a signal. It was proof that this data could be read, parsed and acted upon. The project was on.
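Each cache line turned out to be a Python-literal-style list of [category, payload, timestamp], which means it can be unpacked with `ast.literal_eval`. Here is a minimal sketch of pulling flag messages out of such a line (the helper name is my illustration, not the project's code):

```python
import ast

def extract_flags(line: str) -> list[dict]:
    """Parse one raw cache line and return any flag messages it contains."""
    category, payload, _timestamp = ast.literal_eval(line.strip())
    if category != 'RaceControlMessages':
        return []
    # 'Messages' maps message ids to message dicts
    return [msg for msg in payload.get('Messages', {}).values()
            if msg.get('Category') == 'Flag']

line = ("['RaceControlMessages', {'Messages': {'56': {'Utc': '2025-07-05T11:39:56', "
        "'Category': 'Flag', 'Flag': 'YELLOW', 'Scope': 'Sector', 'Sector': 2, "
        "'Message': 'YELLOW IN TRACK SECTOR 2'}}}, '2025-07-05T11:39:56.262Z']")
flags = extract_flags(line)
print(flags[0]['Flag'], flags[0]['Sector'])  # YELLOW 2
```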
Challenge 1: The Parsing
It was clear that to make this work I had to cache the live timing and simultaneously read the cache. The reader would then have to parse the incoming lines of data, ignore 95% of it, and act only on the data of interest. Since there was no documentation, I had to figure out these things for my setup to work:
- Who is in the lead: What does the lead look like in the data? Is there a "Now this driver leads the session!" message, or do I have to figure it out manually?
- Events on track: flags, incidents and safety cars. How are they recorded, and how are they cleared again?
- Custom session events: things like a race start vs. the start of free practice, or the transition from Q1 to Q2 in a qualifying session.
This challenge was actually two-fold: one part was the sheer number of lines I had to look through, but the other was that even reading the lines didn't inherently make any sense. Take this line for example:
['TimingData', {'Lines': {'5': {'TimeDiffToPositionAhead': '+1.566'}, '10': {'TimeDiffToFastest': '+1.719', 'TimeDiffToPositionAhead': '+0.383', 'Line': 16, 'Position': '16', 'NumberOfLaps': 11, 'Sectors': {'2': {'Value': '25.189', 'PersonalFastest': True}}, 'Speeds': {'FL': {'Value': '337', 'PersonalFastest': True}}, 'BestLapTime': {'Value': '1:44.073', 'Lap': 10}, 'LastLapTime': {'Value': '1:44.073', 'PersonalFastest': True}}, '18': {'Line': 19, 'Position': '19'}, '27': {'TimeDiffToPositionAhead': '+0.181', 'Line': 17, 'Position': '17'}, '43': {'Line': 18, 'Position': '18'}}}, '2025-09-20T09:07:19.672Z']
It indicates that this is a timing entry, but is it for a sector or a full lap, and is the data a delta, a raw timestamp, or a fully parsed lap time? It became a tedious process of slowly figuring out what to look for in the data: one screen with the action going on, the other with the lines of data coming in. Together with clear instructions to an AI, it slowly started to unravel, and the tedious process of reverse-engineering gradually turned into concrete Python functions, like this one determining the fastest lap:
# requires: import ast, import json
def process_lap_time_line(line: str, state: SessionState, mqtt_handler: MQTTHandler) -> None:
    category, payload, _ = ast.literal_eval(line)
    if category == 'TimingData' and 'Lines' in payload:
        for num, data in payload['Lines'].items():
            if 'LastLapTime' in data and isinstance(data['LastLapTime'], dict):
                lap_time_str = data['LastLapTime'].get('Value')
                if lap_time_str and (lap_time := parse_lap_time(lap_time_str)) and lap_time < state.fastest_lap_info.time:
                    driver_info = state.drivers_data[num]
                    driver_abbreviation = driver_info['abbreviation']
                    team_name = driver_info['team_key']
                    state.set_fastest_lap(lap_time, driver_abbreviation, team_name)
                    state.set_session_lead(driver=driver_abbreviation, driver_number=num, team=team_name)
                    lead_payload = json.dumps({"driver": driver_abbreviation, "driver_number": num, "team": team_name})
As more and more lines and logic started to take form, this main loop became the heart of the service, continuously reading new lines from the cache file and feeding them to the correct processing functions.
# from the main loop
with open(cache_file, 'r', encoding='utf-8', errors='replace') as f:
    logging.info(f"DRS {DRS_VERSION} started {session_state.session_type} session. Reading live data from '{cache_file}'...")
    f.seek(0, 2)  # jump to the end of the file so we only read new lines
    while True:
        line = f.readline()
        if not line:
            time.sleep(0.1)
        else:
            try:
                f1_utils.process_session_data_line(line, session_state, mqtt)
                race_lead_process(line, session_state, mqtt)  # this function is set depending on session (practice, qualifying or race)
                f1_utils.process_race_control_line(line, session_state, mqtt)
            except Exception as e:
                logging.error(f"Error processing line: {e}")
As I gathered more information from more sessions, I started to get a more complete picture of the "API", and soon I had logic for resetting between qualifying segments, handling safety cars, and most of the events that I wanted highlighted in my setup.
Challenge 2: The Testing
A great challenge during development was testing. To begin with, the only way to test the logic was during live sessions. This meant that in the early stages I had to decide whether I wanted to mainly watch the action on screen or the action in the data. I decided relatively quickly to start gathering a little library of test strings fetched from the live sessions, and these proved invaluable for testing and verifying the business logic of the service. I still couldn't test unknown events, though, so I had to be at my computer every session to log, and hope that as much as possible would happen.
Soon I could do some testing as I developed. It was slow and tedious, but it ensured that when I made changes to the scripts, the whole thing wouldn't be completely broken by the start of the next session. With this "line library" I could also start writing unit tests for the application using pytest for even more robust development. This allowed me to create a suite of tests to validate core logic, like ensuring a yellow flag message correctly updates the system's state:
def test_yellow_flag(state: SessionState, mock_mqtt: Mock):
    """Testing when yellow flags are raised"""
    # Set yellow flag in sector 2
    yellow_flag_line = "['RaceControlMessages', {'Messages': {'56': {'Utc': '2025-07-05T11:39:56', 'Category': 'Flag', 'Flag': 'YELLOW', 'Scope': 'Sector', 'Sector': 2, 'Message': 'YELLOW IN TRACK SECTOR 2'}}}, '2025-07-05T11:39:56.262Z']"
    process_race_control_line(yellow_flag_line, state, mock_mqtt)

    # Assert
    ## States
    assert state.race_state == 'YELLOW'
    assert 2 in state.yellow_flags
    ## MQTT
    mock_mqtt.queue_message.assert_called_once()
    expected_payload = json.dumps({"flag": "YELLOW", "message": "YELLOW IN TRACK SECTOR 2"})
    mock_mqtt.queue_message.assert_called_with(MqttTopics.FLAG_TOPIC, expected_payload)
Even better, I now have a full simulation run with a "whole session" of events, so I can test everything in an orderly fashion: lead changes, flags, and safety cars out and in!
# SIMULATION_EVENTS holds all the event lines to go through
for event in SIMULATION_EVENTS:
    time.sleep(DELAY_SECONDS)
    description = event['description']
    line_to_write = event['line']
    logging.info(f"SIMULATING: {description}")
    f.write(line_to_write + '\n')  # append to the cache file the service is tailing
    f.flush()
Challenge 3: The ~~Nobel Prize~~ Syncing
The tool was starting to get quite stable, and I could actually let it run while I fully watched the action on TV. Then the lights suddenly started flashing yellow. I sighed and walked over to the PC to see what was wrong, when I heard all the commotion from the TV: the yellow flag was out. For a moment, I was seeing things tick in on the PC before they happened on the TV!
It took a moment before I understood: I could see into the future, and my computer was a time vortex bending reality. I rushed to the phone to call the Nobel committee when another thought struck me: I rely on the same data the broadcasters do. I just have to send that data across my local network, while the broadcasters have to process it, add graphics, and composite it all into a broadcast before pushing it out. There was also a small chance this wasn't a groundbreaking new discovery that would redefine physics, but simply a syncing issue.
After some more research, I had to face the fact that this was yet again not going to be the year I won a Nobel Prize, as it was, in fact, a sync issue. And we weren't talking about being a couple of seconds off, but up to a minute! The solution was pretty self-explanatory: a simple PUBLISH_DELAY config variable that queues messages as they come in, so they are broadcast at the right moment without missing any action. That would have been it, had the delay just been a constant.
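The queueing can be as simple as stamping each message on arrival and only releasing it to MQTT once PUBLISH_DELAY has elapsed. A sketch of that idea (class and method names are my illustration, not the project's code):

```python
import time
from collections import deque

class DelayedPublisher:
    """Hold messages for `delay_seconds` before handing them to a publish callback."""

    def __init__(self, delay_seconds: float, publish):
        self.delay = delay_seconds
        self.publish = publish
        self.queue = deque()  # (arrival_time, topic, payload)

    def enqueue(self, topic: str, payload: str) -> None:
        self.queue.append((time.monotonic(), topic, payload))

    def flush_due(self) -> None:
        """Call from the main loop; publishes everything older than the delay."""
        now = time.monotonic()
        while self.queue and now - self.queue[0][0] >= self.delay:
            _, topic, payload = self.queue.popleft()
            self.publish(topic, payload)

sent = []
pub = DelayedPublisher(0.2, lambda topic, payload: sent.append((topic, payload)))
pub.enqueue('f1/flag', 'YELLOW')
pub.flush_due()    # too early, nothing goes out yet
time.sleep(0.25)
pub.flush_due()    # now the message is released
print(sent)        # [('f1/flag', 'YELLOW')]
```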
The delay seemed to vary week to week, day to day, or just depending on the mood of my cats. I set a delay in seconds and just had to HOPE that it was close enough when the session started, because as any F1 fan knows: seconds are a LONG time in racing. When the delay was set just right, it was an epic experience with the lights reacting in sync with the action on TV, but if it was off by even a second, it felt as jarring as a drummer who cannot keep the beat.
In my frustration I went through several grand ideas: what if I could somehow query the broadcasting delay from the publisher, or have the PC watch the race too and use image analysis to detect when things happen on the broadcast to calculate the delay? It all felt a lot more complicated than it ought to be. That was when a new idea struck me: instead of a one-way street of information, what if I could adjust the delay mid-race from my phone? Even better, what if I could find sync points in the good old live cache?
# The Session info
['SessionInfo', {'Meeting': {'Key': 1268, 'Name': 'Italian Grand Prix', 'OfficialName': "FORMULA 1 PIRELLI GRAN PREMIO D'ITALIA 2025", 'Location': 'Monza', 'Number': 16, 'Country': {'Key': 13, 'Code': 'ITA', 'Name': 'Italy'}, 'Circuit': {'Key': 39, 'ShortName': 'Monza'}}, 'SessionStatus': 'Started', 'ArchiveStatus': {'Status': 'Generating'}, 'Key': 9912, 'Type': 'Race', 'Name': 'Race', 'StartDate': '2025-09-07T15:00:00', 'EndDate': '2025-09-07T17:00:00', 'GmtOffset': '02:00:00', 'Path': '2025/2025-09-07_Italian_Grand_Prix/2025-09-07_Race/', '_kf': True}, '2025-09-07T13:03:34.805Z']
# The gold nugget. Notice how it arrives about three and a half minutes after the scheduled start, meaning it marks the exact moment they go lights out!
['SessionData', {'StatusSeries': {'4': {'Utc': '2025-09-07T13:03:34.805Z', 'SessionStatus': 'Started'}}}, '2025-09-07T13:03:34.805Z']
And with that I had a relatively simple solution for a problem that had plagued me for quite a while: we set a good guesstimate delay in the config, the PC detects the race start, I see the race start on TV and press a button: simply lovely. On top of that, I have some more buttons that let me adjust the delay in finer intervals, giving close-to-perfect precision!
def _on_message(self, client, userdata, msg):
    """Handles incoming MQTT messages. Mainly for DRS CONTROL"""
    try:
        command = msg.payload.decode('utf-8')
        if command == "CALIBRATE_START":
            self.command_queue.put("CALIBRATE_START")
        elif command.startswith("ADJUST:"):
            # Split the incoming message, parse the value and compute the new delay
            _, value_str = command.split(":")
            adjustment = float(value_str)
            current_delay_sec = self.publish_delay.total_seconds()
            self.set_delay(current_delay_sec + adjustment)
    except (UnicodeDecodeError, ValueError) as e:
        logging.error(f"Invalid control message: {e}")
And there we have it: a Python service born from a mountain of cryptic data, hardened by a robust testing suite, and perfected with a solution to a time-bending sync problem. The engine is built, tested, and ready to go.
In the final part of this series, we'll leave the terminal behind and jump into Home Assistant. I’ll share the YAML, scripts, and automations needed to bring this data to life and finally get our smart lights reacting to the race.
Interested in checking out the full code, maybe even trying it yourself? Check out the GitHub repo here