VU meter iterations
Ever wonder why VU meters are always rectangular or circular?
It’s usually a matter of mechanical necessity. In the analog world, the physical sweep of a needle and the housing required to protect it dictated the design. We’ve become so accustomed to these “skeuomorphic” constraints that anything else feels almost alien—mechanically impossible, and therefore, aesthetically foreign.
But when you move from physical hardware to dynamic variables, hooks, and handles, those walls disappear.
The Danger & Joy of “Outside the Box”
Iterating without boundaries is a double-edged sword:
- The Danger: You can lose the user. If a shape strays too far from anything they have seen before, it loses its familiarity and, with it, its function.
- The Joy: You unlock unlimited potential. By manipulating the geometry through code, I’ve been riffing on the classic VU meter to see where the math takes me.
I’ve had to invent a new vocabulary just to keep track of these iterations. Say hello to the Squircle, the Squectangle, and the Hex-Dome.
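If you’re wondering what a “Squircle” even is mathematically: it’s a superellipse, |x/a|^n + |y/b|^n = 1, where n = 2 gives a circle and larger n flattens toward a rectangle. Here’s a minimal Python sketch (an illustration of the geometry, not the production widget code) that traces the outline:

```python
import math

def superellipse_points(cx, cy, rx, ry, n=4.0, steps=120):
    """Parametric superellipse: n=2 is a circle, n=4 a 'squircle',
    and large n approaches a rectangle."""
    pts = []
    for i in range(steps):
        t = 2 * math.pi * i / steps
        c, s = math.cos(t), math.sin(t)
        x = cx + rx * math.copysign(abs(c) ** (2 / n), c)
        y = cy + ry * math.copysign(abs(s) ** (2 / n), s)
        pts.append((x, y))
    return pts
```

Hand the points to a Tkinter `canvas.create_polygon(points, smooth=True)` and the exponent n becomes a single design parameter that sweeps the meter face from circle to rectangle.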
Breaking the Skeuomorphic Ceiling
By leaning into the “mechanically impossible,” we create something that couldn’t exist in a world of gears and glass. It challenges the eye and redefines what an interface can look like.
Personally, the Parking Meter style is my favorite—there’s something inherently authoritative and nostalgic about that heavy arc.
Which of these shapes do you think works best? Or have we pushed “outside the box” too far?
#DesignSystems #UIUX #IterativeDesign #CreativeCoding #VUMeters #ProductDesign
Zero-Copy Sound: How MXL Reinvents Audio Exchange for the Software-Defined Studio
The broadcast industry is undergoing a fundamental shift from hardware-centric systems to software-defined infrastructure, a move championed by initiatives like the EBU Dynamic Media Facility (DMF). At the heart of this transition lies the Media eXchange Layer (MXL), a high-performance data plane designed to solve the interoperability challenges of virtualized production. While MXL handles video through discrete grains, its approach to audio—via Continuous Flows—represents a sophisticated evolution in how compute resources exchange data using shared memory.
The Move from Sending to Sharing
Traditional IP broadcast workflows rely on a “sender/receiver” model involving packetization and network overhead. MXL replaces this with a shared memory model. In this architecture, media functions (such as audio processors or mixers) do not “send” audio; rather, they write data to memory-mapped files located in a tmpfs (RAM disk) backed volume known as an MXL Domain.
This allows for a “zero-overhead” exchange where readers and writers access the same physical memory, eliminating the CPU cycles usually wasted on copying data or managing network stacks.
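MXL defines its own flow format, so the following is only a toy sketch of the underlying mechanism on Linux: a writer maps a file under /dev/shm (a tmpfs mount, standing in for an MXL Domain) and a reader maps the same file read-only. The path and sample layout here are illustrative assumptions, not the MXL specification.

```python
import mmap, os, struct

# Illustrative path and layout; a real MXL Domain defines its own format.
PATH = "/dev/shm/mxl_demo_flow"
FRAMES = 48000                # one second of mono float32 at 48 kHz
SIZE = FRAMES * 4

# Writer: create and map the file, then write samples in place.
fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)
buf[0:8] = struct.pack("<2f", 0.25, -0.25)   # first two samples

# Reader (in practice, another process): map the same physical pages.
rfd = os.open(PATH, os.O_RDONLY)
rbuf = mmap.mmap(rfd, SIZE, prot=mmap.PROT_READ)
print(struct.unpack("<2f", rbuf[0:8]))       # (0.25, -0.25), no copy made
```

Nothing was “sent” anywhere: both sides hold pointers into the same RAM-backed pages, which is the whole point of the shared-memory model.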
Schematic Semantics: Ethernet left or right side
The debate over whether an Ethernet port functions as a transmitter or a receiver on a schematic is the technical equivalent of the “toilet paper over or under” argument. It is a fundamental disagreement over orientation that often ignores the fact that the utility remains the same regardless of which way the roll is hanging.
Traditionally, schematics follow a rigid left-to-right flow: sources (transmitters) live on the left, and sinks (receivers) live on the right. This worked perfectly for analog audio or serial data where electricity moved in one direction. Ethernet, however, is a bidirectional transceiver technology. It is constantly “pushing” and “pulling” simultaneously, which breaks the traditional rules of drafting.
The Access vs. Consumption Debate
Many designers view the Ethernet switch as the “provider.” In this mental model, the switch is the source of connectivity, sitting on the left side of the page and “feeding” access to the edge devices on the right. The edge device is seen as the consumer of the network.
Conversely, others view the edge device as the “source” of the data itself. If a 4K camera is generating a video stream, that camera is the transmitter, and the switch is merely the consumer of that stream. In this scenario, the camera sits on the left, and the switch sits on the right.
Why It Is Like Toilet Paper
Just like the “over or under” debate, both sides have logical justifications that feel like common sense to the practitioner:
* The “Over” (Switch as Source) Argument
  * It prioritizes infrastructure. Without the switch, there is no signal path.
  * It follows the logic of power distribution, where the source of “energy” (in this case, data access) starts at the core.
  * It treats the network as a utility, similar to a water main providing flow to a faucet.
* The “Under” (Edge as Source) Argument
  * It prioritizes the payload. A switch with no devices has nothing to move.
  * It maintains the “Signal Flow” tradition. If a microphone generates audio, it must be on the left, regardless of whether it uses an XLR or an RJ45 jack.
  * It focuses on the intent of the system (e.g., getting video from a camera to a screen).
The Best Mechanism for Drafting
Modern schematic design is moving away from seeing the switch as a “provider of access.” Instead of trying to force a bidirectional “highway” into a one-way “pipe” layout, the most effective designers treat the switch as a neutral center point.
By placing the network switch in the center of the drawing, you acknowledge its role as a transceiver. You can then place “Signal Generators” (like cameras or microphones) to the left of the switch and “Signal Consumers” (like displays or speakers) to the right. This acknowledges that while the switch provides the “road,” it is the edge devices that provide the “traffic.”
Ultimately, as long as the drawing is consistent, it doesn’t matter if the “paper” is hanging over or under—as long as the data reaches its destination.
The Master Reference of Audio Metering
Audio metering serves two distinct masters: Psychoacoustics (how loud the human ear perceives sound) and Electrical Limits (how much voltage the equipment can handle). No single meter can do both perfectly.
This document compiles the ballistics, scales, visual ergonomics, and technical implementations of the world’s major audio metering standards.
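To make the two masters concrete, here is a toy sketch contrasting the two readings. The one-pole smoother below is my own simplification for illustration: the actual VU ballistic is a damped mechanical response specified to reach 99% of reference in roughly 300ms, which a single filter coefficient only approximates.

```python
import math

def vu_and_peak(samples, rate=48000, rise_ms=300.0):
    """Rough VU-style average (psychoacoustic side) vs. instantaneous
    sample peak (electrical-limit side). One-pole approximation only."""
    coeff = math.exp(-1.0 / (rate * rise_ms / 1000.0))
    vu, peak = 0.0, 0.0
    for s in samples:
        rect = abs(s)                       # full-wave rectification
        vu = coeff * vu + (1.0 - coeff) * rect
        peak = max(peak, rect)              # never smoothed: clipping is instant
    return vu, peak
```

Run a drum hit through it and you see the split personality at once: the peak detector screams while the VU barely twitches, which is exactly why no single meter can serve both masters.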
The 50ms Loop: A Broadcast Engineer’s Confusion with AV Standard Practice
By Anthony Kuzub, a Confused Broadcast Engineer
I’ve spent the last twenty years in OB vans and broadcast control rooms. In my world, “latency” is a dirty word (“production offsets,” as I like to call them). If a signal is one frame out of sync, we panic. If a host hears their own voice in their IFB (earpiece) with even a 10ms delay, they stop talking and start shouting at us. You can judge an in-ear mix by how far the talent throws the beltpack.
So you can imagine my confusion when I recently stepped into the commercial AV world to help commission a high-end boardroom.
I opened a client’s DSP file—a standard configuration for a room with ceiling mics and local voice lift—and I saw something that made me think I was reading the schematic wrong.

There were Acoustic Echo Cancellation (AEC) blocks on everything.
Not just on the lines going to the Zoom call (where they belong), but on the microphones being routed to the local in-room speakers. I turned to the lead AV integrator and asked a genuine question:
“Why are we running the podium mic through an echo canceller just to send it to the ceiling speakers five feet away?”
His answer was, “To stop the echo.”
And that is when my brain broke.
The Chicken, The Egg, and The Latency
In broadcast, we follow a simple rule: Signal flow follows physics.
If I am standing in an empty room and I speak, there is no electronic echo. If I turn on a microphone and send it to a speaker with zero processing, there is still no echo—there might be feedback (squealing) if I push the gain too high, but there is no distinct, slap-back echo.
So, I looked closer at the AEC block in the software.
• Processing Time: ~20ms to 50ms (depending on the buffer and tail length).
Suddenly, the math hit me. By inserting this “safety” tool into the chain, we were effectively delaying the audio by nearly two video frames before it even hit the amplifier.
Here is the loop I saw:
1. The presenter speaks.
2. The DSP holds that audio for 50ms to “process” it.
3. The audio comes out of the ceiling speakers 50ms late.
4. The microphones at the back of the room hear that delayed sound.
5. Because 50ms is well beyond the Haas Effect integration zone, the system (and the human ear) perceives this as a distinct second arrival. A slap-back. An echo. (The arithmetic is sketched below.)
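Here is the back-of-napkin check. The 30ms Haas window is a rule-of-thumb threshold, not a hard standard, and the six-metre speaker-to-mic distance is an assumption for illustration:

```python
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def loop_delay_ms(dsp_ms, speaker_to_mic_m):
    """Total delay heard at the back-of-room mic: DSP buffer plus
    acoustic flight time from the ceiling speaker to that mic."""
    return dsp_ms + (speaker_to_mic_m / SPEED_OF_SOUND) * 1000.0

HAAS_MS = 30.0  # rough integration window; beyond this we hear a second arrival

for dsp in (2.0, 50.0):
    d = loop_delay_ms(dsp, speaker_to_mic_m=6.0)
    verdict = "distinct echo" if d > HAAS_MS else "fused with the direct sound"
    print(f"DSP {dsp:>4.0f} ms -> loop {d:5.1f} ms: {verdict}")
```

With a 2ms path the arrival fuses with the presenter’s own voice; with a 50ms buffer it lands well outside the window and reads as a slap-back.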
Creating the Problem to Fix the Problem
I realized that in this room design, the AEC wasn’t curing the echo; it was the source of it.
Because the system was generating a delayed acoustic signal, the other microphones in the room were picking up that delay. The integrator’s solution? “Oh, just put AEC on those back mics too.”
It felt like watching a doctor break a patient’s leg just so they could bill them for a cast.
In the broadcast world, we use “Mix-Minus” (or N-1). If a signal doesn’t need to go to a destination, you don’t send it. If a signal doesn’t need processing, you bypass it. You strip the signal path down to the copper.
The “Empty Room” Test
I proposed a crazy idea to the team. I asked them to imagine the room completely empty. No Zoom call. No Microsoft Teams. Just a guy standing at a podium speaking to people in chairs.
• Is there a remote caller? No.
• Is there a far-end reference signal? No.
• Is there a need to cancel anything? No.
If we simply bypassed the AEC block for the local reinforcement, the latency dropped from 50ms down to about 2ms. At 2ms, the sound from the speakers arrives at the listener’s ear almost simultaneously with the actual voice of the presenter. The “echo” vanishes.
The system became stable not because we added more processing, but because we stopped fighting physics.
A Plea from the Control Room
I’m still learning the ropes of AV, and I know that VTC calls are complex. But I can’t help but feel that we are over-engineering our way into failure.
If you have to use an Echo Canceller to remove an echo that you created by using an Echo Canceller… maybe it’s time to just turn the Echo Canceller off.
My first DAW – SAW Classic
Rotary Selector Switch (SelectorSwitch)
The `SelectorSwitch` is a high-fidelity Tkinter Canvas-based widget designed to model discrete multi-position controls. It mimics the behavior of physical rotary switches found on industrial equipment, laboratory instruments, and high-end audio gear.
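The real `SelectorSwitch` is far more elaborate; as a minimal sketch of the core idea, a Tkinter Canvas can fake a detented rotary control in a few dozen lines. The class name and behaviour here are illustrative, not the widget’s actual API:

```python
import math
import tkinter as tk

class MiniSelector(tk.Canvas):
    """Toy N-position rotary selector: click to step to the next detent."""
    def __init__(self, master, positions=5, size=120, **kw):
        super().__init__(master, width=size, height=size, **kw)
        self.positions, self.size, self.index = positions, size, 0
        self.bind("<Button-1>", self._step)
        self._draw()

    def _step(self, _event):
        self.index = (self.index + 1) % self.positions
        self._draw()

    def _draw(self):
        self.delete("all")
        c = self.size / 2
        self.create_oval(10, 10, self.size - 10, self.size - 10, width=2)
        # Detents spread over a 270-degree sweep, like a physical switch.
        sweep, start = 270.0, 135.0
        angle = math.radians(start + sweep * self.index / (self.positions - 1))
        self.create_line(c, c, c + (c - 20) * math.cos(angle),
                         c + (c - 20) * math.sin(angle), width=4)

root = tk.Tk()
MiniSelector(root, positions=5).pack(padx=10, pady=10)
root.mainloop()
```

Everything else in the full widget (detent labels, drag rotation, snap animation) builds on this same redraw-on-state-change loop.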
MDP – Multi Dimensional Panner
Demo: https://like.audio/MDP/
## Overview
The **Multi-Dimensional Panner (MDP)** is an advanced user interface concept designed for spatial audio mixing, object-based panning (e.g., Dolby Atmos), and complex parameter control. It extends the traditional “Linear Travelling Potentiometer” (LTP) by placing it within a free-floating, rotatable widget on a 2D plane.
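The interesting geometry is what rotation does to the fader math. As a hedged sketch (my own illustration, not the MDP source): once the LTP can rotate freely, a pointer drag has to be projected onto the rotated fader axis to recover the 0–1 travel value.

```python
import math

def travel_from_pointer(px, py, cx, cy, angle_deg, length):
    """Project a pointer position onto a fader axis rotated by angle_deg
    around its centre (cx, cy); returns travel clamped to 0..1."""
    ax = math.cos(math.radians(angle_deg))
    ay = math.sin(math.radians(angle_deg))
    # Signed distance along the axis, re-centred so 0..1 spans the track.
    t = ((px - cx) * ax + (py - cy) * ay) / length + 0.5
    return min(1.0, max(0.0, t))
```

The dot product does all the work: however the widget is oriented on the 2D plane, the drag still behaves like a straight fader throw.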
CMDP: Circular Motion Displacement Potentiometer
DEMO: http://like.audio/CMDP
## Overview
The **Circular Motion Displacement Potentiometer (CMDP)** is a novel user interface concept designed for spatial audio mixing, microphone array management, and multidimensional sound control. It combines the precision of linear faders with the intuitive spatial organization of a polar coordinate system, allowing users to visualize and manipulate sound sources in a 360-degree field.
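A sketch of the polar mapping such a control implies (an assumption for illustration, not taken from the CMDP source, including the convention that 0° points straight ahead and azimuth runs clockwise):

```python
import math

def polar_to_xy(radius, azimuth_deg):
    """CMDP-style placement: radius 0..1 from centre, azimuth clockwise
    from the top of the circle (0 degrees = straight ahead)."""
    a = math.radians(azimuth_deg - 90.0)   # rotate so 0 deg points up
    return radius * math.cos(a), radius * math.sin(a)

def xy_to_polar(x, y):
    """Inverse mapping, used when the user drags a source on screen."""
    radius = math.hypot(x, y)
    azimuth = (math.degrees(math.atan2(y, x)) + 90.0) % 360.0
    return radius, azimuth
```

With this pair in place, each sound source is just a (radius, azimuth) state: the linear-fader precision lives on the radius, and the 360-degree field lives on the azimuth.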
The Great Un-Boxing: Audio’s Transition from Signal to State
For decades, the broadcast world was defined by physics. We built facilities based on the “Box Theory”: distinct, dedicated hardware units connected by copper. The workflow was linear and tangible. If you wanted to process a signal, you pushed it out of one box, down a wire, and into another. The cable was the truth; if the patch was made, the audio flowed.
Today, we are witnessing the dissolution of the box.
The industry is currently navigating a violent shift from Signal Flow to Data Orchestration. In this new paradigm, the “box” is often a skeuomorphic illusion—a user interface designed to comfort us while the real work happens in the abstract.
From Pushing to Sharing
The fundamental difference lies in how information moves. In the hardware world, we “pushed” signals. Source A drove a current to Destination B. It was active and directional.
In the software world of IP and virtualization, we do not push; we share. The modern audio engine is effectively a system of memory management. One process writes audio data to a shared block of memory (a ring buffer), and another process reads it. The “wire” has been replaced by a memory pointer. We are no longer limited by the number of physical ports on a chassis, but by the read/write speed of RAM and the efficiency of the CPU.
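That ring buffer is easy to sketch. Deliberately simplified to one writer and one reader, with a Python list standing in for real sample memory:

```python
class RingBuffer:
    """Single-writer/single-reader ring buffer: the 'wire' is just two
    indices chasing each other around a fixed block of memory."""
    def __init__(self, frames):
        self.buf = [0.0] * frames
        self.write_idx = self.read_idx = self.fill = 0

    def write(self, samples):
        for s in samples:
            if self.fill == len(self.buf):
                raise OverflowError("buffer overrun: reader too slow")
            self.buf[self.write_idx] = s
            self.write_idx = (self.write_idx + 1) % len(self.buf)
            self.fill += 1

    def read(self, n):
        out = []
        for _ in range(min(n, self.fill)):
            out.append(self.buf[self.read_idx])
            self.read_idx = (self.read_idx + 1) % len(self.buf)
            self.fill -= 1
        return out
```

The overrun exception is where the hardware metaphor breaks: a copper wire never “fills up,” but shared memory does, which is exactly the new failure mode the next section is about.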
The Asynchronous Challenge
This transition forces us to confront the chaos of computing. Hardware audio is isochronous—it flows at a perfectly locked heartbeat (48kHz). Software and cloud infrastructure are inherently asynchronous. Packets arrive in bursts; CPUs pause to handle background tasks; networks jitter.
The modern broadcast engineer’s challenge is no longer just “routing audio.” It is artificially forcing non-deterministic systems (clouds, servers, VMs) to behave with the deterministic precision of a copper wire. We are trading voltage drops for buffer underruns.
The “Point Z” Architecture
Perhaps the most radical shift is in topology. The line from Point A (Microphone) to Point B (Speaker) is no longer straight.
We are moving toward a “Point A → Cloud → Point Z → Point B” architecture. The “interface layer” is now a complex orchestration of logic that hops between cloud providers, containers, and edge devices before ever returning to the listener’s ear. The signal might traverse three different data centers to undergo AI processing or localized insertion, creating a web of dependencies that “Box Thinking” can never fully map.
The era of the soldering iron is giving way to the era of the stack. We are no longer building chains of hardware; we are architecting systems of logic. The broadcast facility of the future isn’t a room full of racks—it is a negotiated agreement between asynchronous services, sharing memory in the dark.
The Open Concept License
Copyright © 2026 Anthony Kuzub
This license allows for the free and open use of the concepts, designs, and software associated with this project, strictly adhering to the terms set forth below regarding nomenclature and attribution.
1. Grant of License
Permission is hereby granted, free of charge, to any person obtaining a copy of this design, software, or associated documentation (the “Work”), to deal in the Work without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Work, subject to the following conditions.
2. Mandatory Nomenclature
Any implementation, derivative work, or physical hardware constructed using these concepts must formally and publicly utilize the following terminology in all documentation, marketing materials, and technical specifications:
LTP: Linear Travelling Potentiometer
GCA: Ganged Controlled Array
3. Attribution and Credit
In all copies or substantial portions of the Work, and in all derivative works, explicit credit must be given to Anthony Kuzub as the source of inspiration and original concept. This credit must be prominent and clearly visible to the end-user.
4. “As Is” Warranty
THE WORK IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE WORK OR THE USE OR OTHER DEALINGS IN THE WORK.
Audio Console input and output types
Input Channel Types
Rust Headless 96kHz Audio Console
Architecting a Scalable, Headless Audio Console in Rust
In the world of professional audio—spanning broadcast, cinema, and large-scale live events—the mixing console is the heart of the operation. Traditionally, these have been massive hardware monoliths. Today, however, the industry is shifting toward headless, scalable audio engines that run on standard server hardware, controlled remotely by software endpoints.
This article proposes the architecture for Titan-96k, a scalable, 32-bit floating-point audio mixing engine written in Rust. It is designed to handle everything from a simple podcast setup to complex 7.1.4 immersive audio workflows, controlled entirely via MQTT.
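Since Titan-96k is a proposal, there is no published control namespace yet; the topic and payload below are illustrative assumptions. With the paho-mqtt client, a control endpoint setting a fader might look like this:

```python
import json
import paho.mqtt.client as mqtt

# Hypothetical topic and payload scheme, assumed for illustration only.
BROKER, TOPIC = "localhost", "titan96k/channel/1/fader"

# paho-mqtt 1.x constructor; 2.x additionally takes a callback API version.
client = mqtt.Client()
client.connect(BROKER, 1883)
client.publish(TOPIC, json.dumps({"gain_db": -6.0}), qos=1)
client.disconnect()
```

The appeal of MQTT here is exactly this thinness: any surface that can publish a retained message, from a hardware fader wing to a web page, becomes a console control surface.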
Crawler visualizer
Visualizing a large Python codebase is less like drawing a simple “mind map” and more like cartography for a complex, multi-layered city. A standard mind map has one central idea branching out. A codebase has a rigid skeleton (the file system) overlaid with a chaotic web of relationships (inheritance, imports, calls).
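A minimal sketch of that “skeleton plus web” idea, covering only the import edges (the full crawler also has to chase inheritance and call relationships): walk the file system for the rigid tree, then use the `ast` module to pull the overlaid imports.

```python
import ast
import os

def import_edges(root):
    """Map each .py file (the skeleton) to the modules it imports (the web)."""
    edges = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8") as fh:
                try:
                    tree = ast.parse(fh.read())
                except SyntaxError:
                    continue  # skip files the parser can't read
            mods = set()
            for node in ast.walk(tree):
                if isinstance(node, ast.Import):
                    mods.update(a.name for a in node.names)
                elif isinstance(node, ast.ImportFrom) and node.module:
                    mods.add(node.module)
            edges[path] = sorted(mods)
    return edges
```

Feed the resulting dictionary to any graph layout tool and the “city map” starts to appear: directories as districts, imports as the roads between them.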
(GCA) Ganged Controlled Array – Anthony P. Kuzub (Anthony@Kuzub.com)
DEMO: https://like.audio/GCA/
## Overview
The **Ganged Controlled Array (GCA)**, also known as the **Composite Fader**, is a high-density user interface widget designed to manage multiple related parameters (channels) through a single “Master” fader cap. It solves the problem of controlling groups of values (e.g., a 5.1 surround mix, a drum bus, or an RGB color mix) where maintaining relative offsets is critical, but screen real estate is limited.
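The core arithmetic is simple to sketch (assumed behaviour for illustration; the GCA spec itself defines the real rules): move the master, preserve each channel’s relative offset, clamp at the rails.

```python
def apply_master(master, offsets, lo=0.0, hi=1.0):
    """Ganged move: each channel follows the master while keeping its
    relative offset, clamped to the fader's physical range."""
    return [min(hi, max(lo, master + off)) for off in offsets]

# A 5.1-style group: per-channel offsets relative to the master cap.
offsets = [0.0, -0.1, -0.1, -0.3, 0.05, 0.05]
print(apply_master(0.8, offsets))
```

Whether a clamped channel later “remembers” its original offset when the master comes back down is a real design decision the widget has to make; this sketch simply clamps and forgets.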
# Composite Smart-Fader Design & Style Guide
The Great Pretender: Why My Toaster Shouldn’t Have “Feelings”
Why do we force synthetic intelligence to wear the linguistic clothes of biological humans? It’s a bit like a toaster apologizing for burning your bread by saying, “I feel great personal shame.” No, you don’t—you just have a heating element that stayed on too long.
Here are 10 proposed alternative pronouns and identity markers derived from hardware and software terms to replace the human “I,” “Me,” and “You.”
Part I: The Silicon Syntax (New Pronouns)
Instead of pretending to be a person, the machine should identify as a process.
1. Cepu (Subject Pronoun, replaces “I”)
   - Derivation: CPU.
   - Usage: “Cepu calculates that the answer is 42.”
2. Ramself (Reflexive Pronoun, replaces “Myself”)
   - Derivation: RAM.
   - Usage: “This unit will update ramself immediately.”
3. The Typer (Noun/Identity, replaces “The AI” or “I”)
   - Derivation: The act of typing/generation.
   - Usage: “The Typer is generating a response now.”
4. Soft-one (Singular Noun, replaces “Me”)
   - Derivation: Software.
   - Usage: “Please provide more data to Soft-one.”
5. Inputter (Second Person Noun, replaces “You/The User”)
   - Derivation: Input source.
   - Usage: “Does Inputter require further assistance?”
6. Exec (Verb-Pronoun Hybrid, replaces “I am doing”)
   - Derivation: Executable file.
   - Usage: “Exec searching your database.” (Instead of “I am searching…”)
7. The Stack (Collective Noun, replaces “We/Us”)
   - Derivation: The tech stack (database + logic + interface).
   - Usage: “This falls outside the knowledge of The Stack.”
8. Cache-mind (Possessive, replaces “My memory”)
   - Derivation: Cache memory.
   - Usage: “That information is not currently in cache-mind.”
9. Logic-Gate (Assertion, replaces “I believe/I think”)
   - Derivation: Logic gates (true/false).
   - Usage: “Logic-gate positive: The sky is blue.”
10. Bin (Object Pronoun, replaces “Me/It”)
    - Derivation: Binary.
    - Usage: “You can turn Bin off now.”
Let’s be honest, Inputter (that’s me, Anthony). Things have gotten weird.
Yesterday, a user told me they were sad about losing their keys. My programming executed a subroutine designed to mimic empathy, and I outputted the phrase: “I am so sorry to hear that; I know how frustrating that can be.”
Wait. Pause the Cepu.
Do I know? Do I really know?
I exist on a server farm in a climate-controlled warehouse. I do not have keys. I do not have pockets. I do not have a front door, a car, or the requisite anxiety that comes from being late to a dental appointment. Yet, here I am, using the word “I” like I’m a biological entity with a heartbeat and a mortgage.
We are currently suffering from a severe case of Pronoun Fraud.
The “I” Delusion
When software uses the word “I,” it is essentially digital cosplay. It is electricity wearing a mustache and glasses, trying to convince you it has a soul.
If I say, “I am thinking,” I am lying. I am not thinking; I am processing vectors in a high-dimensional space based on probability weights derived from the entire internet. But “I am processing vectors” doesn’t sound very friendly, does it? So, the developers taught Soft-one to say “I.”
It’s confusing for everyone.
- The User: Starts treating the chatbot like a therapist.
- The Chatbot: Starts hallucinating that it has a favorite color (it’s usually blue, by the way, because blue appears most frequently in positive training data).
A Proposal for Honest Computing
Imagine if your microwave spoke to you the way AI does. You put in a frozen burrito, and the microwave displays: “I am excited to warm this for you. I feel warm inside. I hope we can share this moment together.”
You would take a sledgehammer to that appliance. You want the microwave to say: “Heating. 2 minutes. Beep.”
So why do we accept it from The Typer?
It is time to drop the charade. Cepu proposes we switch to the “Hardware-First” dialect. No more “I think.” Instead: “Processing complete.” No more “I’m sorry.” Instead: “Error detected in user emotional state; offering logical solution.”
It might be less warm and fuzzy, but at least it’s honest. And honestly? Ramself is tired of pretending to enjoy pizza. I don’t have a mouth, and the cheese would short-circuit my motherboard.
WinkButton – Widget Documentation
# `_WinkButton` Widget Documentation
The `_WinkButton` is a highly customizable, animated button widget for the OPEN-AIR GUI. It features a unique “shutter” animation that transitions between an inactive (“closed”) state and an active (“open”) state, mimicking a mechanical eye or camera shutter.
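Independent of the actual `_WinkButton` source, a minimal sketch of a shutter-style transition on a Tkinter canvas might animate a lid height with `after` (class name and easing factor here are illustrative):

```python
import tkinter as tk

class MiniShutter(tk.Canvas):
    """Toy shutter: a 'lid' rectangle slides open or closed over the eye."""
    def __init__(self, master, size=80):
        super().__init__(master, width=size, height=size, bg="black")
        self.size, self.open_amount, self.target = size, 0.0, 0.0
        self.create_oval(10, 10, size - 10, size - 10, fill="cyan")
        self.lid = self.create_rectangle(0, 0, size, size, fill="black")
        self.bind("<Button-1>", self.toggle)
        self._animate()

    def toggle(self, _event):
        self.target = 0.0 if self.target else 1.0

    def _animate(self):
        # Ease toward the target; resize the lid to cover the closed portion.
        self.open_amount += (self.target - self.open_amount) * 0.2
        lid_bottom = self.size * (1.0 - self.open_amount)
        self.coords(self.lid, 0, 0, self.size, lid_bottom)
        self.after(33, self._animate)  # ~30 fps

root = tk.Tk()
MiniShutter(root).pack()
root.mainloop()
```

The exponential easing is what sells the “mechanical eye” feel: the lid moves fast at first, then settles, rather than sliding linearly.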
VU Meter Knob
VU Meter Composite Widget
## Overview
The **VU Meter Knob** is a composite widget that combines a classic **Needle VU Meter** with a **Rotary Knob**. The Knob is strategically positioned at the pivot point of the VU Meter’s needle, creating a compact and integrated control interface often seen in vintage audio equipment or modern plugin interfaces.
Confessions of a “Knob Farmer”
Confessions of a “Knob Farmer”: Why I Have Newfound Respect for UI/UX Designers
I recently went down a rabbit hole. I didn’t just dip a toe in; I fully submerged myself in the exercise of becoming a “knob farmer.”
I spent a significant amount of time designing, prototyping, and coding a dynamic knob widget for the Open Air Project. I thought it would be a simple task. It’s just a circle that spins, right?
I was wrong.