Replacing Ring with PoE cameras and a self-hosted network video recorder


An NVR (network video recorder) is the software stack that pulls in camera streams, writes clips to disk, and serves a timeline you can scrub in a browser. Frigate is the one I run. This post is about replacing a Ring-shaped habit with that kind of recorder, on hardware I control.

Ring-style products got the UX right: clip, scrub bar, audio in the clip. I did not want the usual bundle: subscription, someone else’s cloud, policy drift, or battery math if I refuse to open the wall.

Power over Ethernet (PoE) sold the hardware side: one cable to a fixed camera for power and data, no charger in the mulch, no guessing whether the mesh reaches the eave. I still wanted motion clips and a browser UI without handing archive keys to a vendor. That is local infrastructure replacing the Ring-shaped habit, not a purity contest about self-hosting.

If you are already self-hosting, you may still see Review or Live with no sound, grey daylight clips, or the stack “randomly” losing the camera after a reboot. Those often trace to everything except the recorder UI you are staring at.

The wiring story: a small Linux box on Wi‑Fi for the house and Ethernet to a PoE path for the camera, plus Docker, Frigate, and Neolink. This Reolink model speaks Reolink’s Baichuan protocol on one port and, on my firmware, does not expose plain RTSP (Real Time Streaming Protocol, the usual standard stream format for IP cameras) on port 554. None of it is one clever trick. It is routing, systemd’s network daemon (systemd-networkd), DHCP address assignment, firewall rules, and one encoder flag lining up.


Windows felt hostile; Omarchy on this box did not

I keep Windows for games and a few tools. I do not want it on the recorder role. Lately that OS spends your attention on account pressure, shell recommendations, defaults that reset after updates, feature noise where settings used to be, and reboot cycles when you thought the box could live in a closet. Fixable, but time I would rather spend on routing and retention.

Linux on a small PC is not automatically peaceful either. I wanted a fast path to a sane desktop, power-user defaults, and docs close enough to servers that Frigate, Docker, and systemd-networkd felt normal.

Omarchy on this hardware was that path. Install was straightforward, the result felt intentional instead of OEM-cursed, and I was editing .network units and bringing containers up the same day without a security modal every time I touched a listener. Compared to the last time I tried to make Windows behave like a quiet appliance, Omarchy was boring in the right way. Ring replacement is already enough load; I did not need the host OS acting like another product manager.


What I was optimizing for

  • Frigate writes to disks I control: motion-only retention, longer alert overlap later, export without a vendor export flow. That is the upside I buy with the extra work.
  • PoE plus a dedicated leg: the camera is not on guest Wi‑Fi fighting TVs. The switch or injector path is the physical contract you can label.
  • Segmentation: camera on its own small Ethernet subnet (a /24, i.e. 256 addresses), not on the home Wi‑Fi subnet. The Linux box is the only bridge.
  • Stable DHCP reservation so the camera’s address does not drift. DHCP is what hands out IP addresses; a reservation pins one address to one device. Treat “ffmpeg can read the RTSP URL” and “the browser plays Review with sound” as different acceptance tests.
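The segmentation and reservation bullets can be sanity-checked on paper before any container runs. A minimal sketch with Python's ipaddress module; the house net, camera /24, gateway, and reserved camera address here are hypothetical placeholders, not my actual layout:

```python
import ipaddress

# Hypothetical layout: house Wi-Fi subnet, isolated camera /24,
# the host's static gateway address on the camera leg, and the
# DHCP reservation the camera is expected to keep.
house_net = ipaddress.ip_network("192.168.1.0/24")
camera_net = ipaddress.ip_network("10.77.0.0/24")
gateway = ipaddress.ip_address("10.77.0.1")
camera_ip = ipaddress.ip_address("10.77.0.10")

# The two legs must not overlap, or routing on the host gets ambiguous.
assert not house_net.overlaps(camera_net)

# Both the host's camera-leg address and the reservation must live
# inside the isolated /24.
assert gateway in camera_net
assert camera_ip in camera_net

print(f"{camera_net} holds {camera_net.num_addresses} addresses")
```

Trivial, but it is exactly the invariant that later broke when a generic network profile clobbered the static gateway.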

Topology (two legs, one host)

The host has Wi‑Fi to the house and Ethernet to a dumb switch and the camera.

```mermaid
flowchart TB
  subgraph Home["Home LAN"]
    Client["Browser on laptop or phone"]
    Inet["Rest of home / internet"]
  end
  subgraph Host["Linux recorder host"]
    WifiNIC["Wi‑Fi interface"]
    EthNIC["Dedicated camera interface\nstatic gateway on /24"]
    Svc["Docker: Neolink + Frigate\nhost networking"]
    WifiNIC --- Svc
    EthNIC --- Svc
  end
  subgraph Iso["PoE / isolated switch leg"]
    SW["Switch (PoE)"]
    CAM["PoE IP camera"]
  end
  Client --> WifiNIC
  Inet --> WifiNIC
  EthNIC --> SW --> CAM
```

Frigate’s UI is on the house side; Neolink and Frigate talk to the camera only on the isolated leg. Once the subnet exists, keeping cameras off the broadcast segment your laptops sit on is a cheap win, unless you deliberately want them together.


Vendor apps hide the protocol. When you self-host, you are the integration layer. The camera still offers Baichuan on TCP port 9000 here. Neolink logs in and republishes RTSP so normal recorder software can subscribe.

```mermaid
flowchart LR
  CAM["Camera :9000\nBaichuan"]
  NL["Neolink\nhost network\ncustom RTSP port"]
  G2["go2rtc inside Frigate\ndefault RTSP port"]
  FG["Frigate ffmpeg\ndetect + record"]
  CAM --> NL
  NL -->|"RTSP over isolated LAN"| G2
  G2 -->|"RTSP loopback + stream pick"| FG
```

Neolink on a default Docker bridge could not reliably reach the camera subnet once the host firewall (I use UFW, Uncomplicated Firewall) and Docker’s packet filtering (the DOCKER-USER chain) did their jobs. Host networking put Neolink on the same routing table as the camera network interface; Frigate got the same treatment so its ffmpeg could hit go2rtc on loopback while the kernel still reached the camera net. go2rtc is the stream multiplexer bundled with Frigate, and it already owns the usual RTSP port, so I moved Neolink’s bind port and pointed consumers at the gateway address on the isolated /24 rather than loopback; that way one URL works for anything listening on that leg. The low-resolution substream’s session description failed on this camera body; the main stream worked, so I pay the CPU cost of decoding main for motion detection until I split roles smarter.
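For reference, the Neolink side of that wiring looks roughly like this. A sketch of a neolink.toml, with the gateway address, port, stream name, and credentials as placeholders; check the keys against the Neolink docs for your version:

```toml
# Neolink republishes the camera's Baichuan stream as RTSP.
bind = "10.77.0.1"    # the gateway on the isolated /24, not loopback
bind_port = 8555      # moved off the default; go2rtc holds that one

[[cameras]]
name = "porch"                  # becomes rtsp://10.77.0.1:8555/porch
username = "admin"              # placeholder credentials
password = "changeme"
address = "10.77.0.10:9000"     # Baichuan on TCP 9000, not RTSP
```

The container runs with host networking (docker run --network host ...) so it shares the routing table described above instead of sitting behind the Docker bridge.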


systemd-networkd and “helpful” defaults

The Omarchy image ships a broad [Match] for wired interfaces. It also matched the camera network port, so a generic DHCP profile won over the tiny static file for the isolated /24, and the gateway address vanished after reboots. The fix: narrow the generic match, then add a dedicated .network for the camera leg with a static /24, no DHCP client on that interface, and multicast DNS (mDNS) off so camera-subnet names do not leak onto the house LAN.
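A sketch of the end state, with interface names and addresses as placeholders (yours will differ; the point is that the broad [Match] no longer claims the camera port):

```ini
# /etc/systemd/network/20-wired.network  (generic profile, narrowed)
[Match]
Name=enp1s0           # only the house-facing wired port, no wildcard

[Network]
DHCP=yes
```

```ini
# /etc/systemd/network/30-camera.network  (dedicated camera leg)
[Match]
Name=enp2s0

[Network]
Address=10.77.0.1/24  # static gateway on the isolated /24
DHCP=no
MulticastDNS=no       # keep camera-subnet names off the house LAN
```

After editing, networkctl reload (or a restart of systemd-networkd) and networkctl status on the camera interface confirm which file actually won.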


DHCP: read your own config twice

dnsmasq hands out DHCP only on the camera interface, with DNS lookups disabled inside dnsmasq so it does not fight port 53 with whatever else the desktop runs. A separate bug: the main config never included the drop-in directory, so dhcp-host reservations were never parsed. After conf-dir (or equivalent) was right, reservations stuck. Pin one address and keep Neolink’s camera stanza in sync; I am not pasting reservation lines here.
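A sketch of the dnsmasq side, with interface and range as placeholders. The dhcp-host reservation itself lives in a drop-in, which is exactly why the conf-dir line matters:

```ini
# /etc/dnsmasq.conf (sketch)
interface=enp2s0                  # DHCP only on the camera leg
bind-interfaces                   # do not grab every interface
port=0                            # disable dnsmasq's DNS entirely
dhcp-range=10.77.0.50,10.77.0.99,12h

# Without this line, files in /etc/dnsmasq.d (including the
# dhcp-host reservation drop-in) are never parsed.
conf-dir=/etc/dnsmasq.d/,*.conf
```

dnsmasq --test against the config is a cheap check that the drop-in directory is actually being read before you trust a reboot.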


Frigate, go2rtc, browser audio

go2rtc stream names should match Frigate camera names. Neolink hands off H.264 video plus PCM audio (raw samples from the camera). Browsers want AAC for Media Source Extensions playback and Opus for WebRTC calls, so go2rtc does the usual second hop to add those tracks. Frigate pulls the restream from loopback and selects the AAC leg; recording uses the generic AAC record preset. Review: person and car, full-frame zone so required_zones stays simple, slightly longer alert-linked retention. Not a full doorbell cloud (no MQTT push stack here), but Ring-shaped habits on disks I own, without renting access to my own porch footage.
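The shape of that wiring in Frigate’s config file, as a sketch: stream and camera names, ports, and addresses are placeholders, and the restream syntax and preset name should be checked against the docs for your Frigate version:

```yaml
go2rtc:
  streams:
    porch:
      # Pull Neolink's restream from the gateway on the isolated /24.
      - rtsp://10.77.0.1:8555/porch
      # Second hop: add AAC (for MSE) and Opus (for WebRTC) tracks.
      - "ffmpeg:porch#audio=aac#audio=opus"

cameras:
  porch:
    ffmpeg:
      inputs:
        # Frigate reads go2rtc on loopback and selects the AAC leg.
        - path: rtsp://127.0.0.1:8554/porch?video&audio=aac
          roles:
            - detect
            - record
      output_args:
        record: preset-record-generic-audio-aac
    review:
      alerts:
        labels:
          - person
          - car
```

The go2rtc stream name and the Frigate camera name match on purpose; that is what makes the loopback restream URL line up without extra mapping.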


Audio was a camera setting, not a recorder mystery

Frigate already muxed AAC; ffprobe still showed silence because the camera’s encoder table had stream audio disabled. Flipping that in firmware, or over HTTP after turning the camera’s built-in web server on via Neolink’s services command, fixed the waveforms. Digest authentication tip: do not use curl -f on login; the 401 response is part of the handshake.
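The curl shape that worked, hedged: the host, endpoint path, and credentials below are placeholders for whatever your camera’s HTTP API exposes, not a documented Reolink route. The point is --digest without -f, since the initial 401 carries the nonce and is step one of the exchange, not an error:

```shell
# Placeholder host, path, and credentials. Let curl drive the digest
# handshake; do not bolt -f on and abort at the 401 challenge.
curl --digest -u admin:changeme \
  "http://10.77.0.10/cgi/encoder?audio=on"
```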

```mermaid
flowchart LR
  subgraph Cam["Camera"]
    ENC["Encoder: audio on"]
    PCM["PCM in stream"]
  end
  subgraph Edge["Neolink + go2rtc"]
    NL2["RTSP from Neolink"]
    G2A["AAC + Opus for browsers"]
  end
  subgraph FG["Frigate"]
    REC["MP4 on disk"]
    UI["Live / Review"]
  end
  ENC --> PCM --> NL2 --> G2A --> REC
  G2A --> UI
```

Color vs infrared

Monochrome daylight clips were the day/night mode on Auto dropping to IR. Forcing color in the image API fixes “grey daylight” while debugging; I switch back to Auto when I want IR at night.


Reaching the UI from inside the house

Open Frigate’s HTTP port only where you need it, not on every interface by default. Be deliberate about which interfaces run mDNS. After restarts, expect brief HTTP 500 responses from the reverse proxy while backends wake; that is noise, not a dead camera.
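A sketch of that firewall posture with UFW, assuming wlan0 faces the house and Frigate’s UI sits on its default port (5000 here; check your version and your interface names, both are placeholders):

```shell
# Default-deny inbound, then allow the Frigate UI only on the
# house-facing interface; nothing on the camera leg reaches the UI.
ufw default deny incoming
ufw allow in on wlan0 to any port 5000 proto tcp
```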


If you repeat this, start here

  1. Draw subnets and ARP (address resolution: who has which IP on which wire) before Docker. No container fixes a missing address on the camera leg.
  2. Assume generic .network matches will eat a second network card unless you narrow them.
  3. Prove the DHCP server actually reads your drop-ins.
  4. Split checks: RTSP test in VLC, files on disk, Live in browser, Review in browser.
  5. When the recorder disagrees with reality, query the camera’s HTTP API for encoders and day/night.
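The split-check idea in step 4 as concrete commands, with URLs and paths as placeholders matching the sketches above; each answers one question, so a failure points at one layer:

```shell
# 1. Does Neolink publish a readable stream on the isolated leg?
ffprobe -v error -show_streams rtsp://10.77.0.1:8555/porch

# 2. Does go2rtc's loopback restream carry the AAC audio track?
ffprobe -v error -select_streams a -show_streams \
  "rtsp://127.0.0.1:8554/porch?video&audio=aac"

# 3. Are recordings actually landing on disk with nonzero size?
ls -lh /path/to/frigate/recordings/
```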

Closing

The hard part was not Frigate. It was the host routing, DHCP actually loading, Docker seeing the right interface, and the microphone bit in the encoder being on. After that, the stack looked obvious and clips sounded like the room.

PoE, a fixed camera, local playback with sound, retention I can explain without a brand portal: that is what I wanted over Ring for this spot. The cost is owning the integration, the subnet, and the failure modes. For me that trade is the point.