I was just saying that there can be a lot of good reasons for downtime. Heck, I use a secondary in my network because sometimes my unraid host starts dnsmasq and it clobbers my adguard container
Depending on the client, it can be. The Microsoft page defines expected DNS client behavior pretty cleanly: [Microsoft Learn](https://learn.microsoft.com/en-us/troubleshoot/windows-server/networking/dns-client-resolution-timeouts#what-is-the-default-behavior-of-a-windows-7-or-windows-8-dns-client-when-two-dns-servers-are-configured-on-the-nic). There haven’t been any published changes to this that I’ve seen, and it more or less matches my experience. Linux is a lawless land in this respect; it really boils down to “it might”, so caveat emptor there. That’s also why I suggested a public ad-blocking DNS server as a secondary, in case multicast DNS does its multicast DNS thing
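To make the failover idea concrete, here’s a toy Python model of how a stub resolver with two configured servers behaves: it tries each in order and only falls back after burning a timeout on a dead one. The timings are illustrative placeholders, not the actual Windows retry schedule from the Microsoft article.

```python
# Toy model of stub-resolver failover. Not real DNS traffic —
# just the "try primary, time out, try secondary" pattern.

def resolve(servers, responsive, timeout=1.0):
    """Return (server_that_answered, simulated_wait_seconds),
    or (None, total_wait) if nothing answered."""
    waited = 0.0
    for server in servers:
        if server in responsive:
            return server, waited
        waited += timeout  # burned the full timeout on a dead server
    return None, waited

# Primary down, secondary up: the lookup still succeeds, just slower.
print(resolve(["10.0.0.2", "9.9.9.9"], {"9.9.9.9"}))  # → ('9.9.9.9', 1.0)
```

The takeaway is that a secondary doesn’t make outages invisible, it just caps them at a per-query delay instead of a hard failure.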
No worries, I had the same thought at first and was very confused for a minute
OP already said that their current DHCP solution (the router) can’t push multiple DNS servers. Having a good secondary can be really helpful for things like power blips, maintenance windows, and cats pulling power cables. There are a few solutions that also do ad blocking that can make good secondaries
This would be great except OP said that their router can’t push 2 DNS addresses. Otherwise, ya, redundant services are always best
If you already have pihole in your environment, I would just use that. DHCP is pretty lightweight, so the pi should be more than capable, and you don’t want to complicate your core services more than you need to
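If you do move DHCP onto the Pi-hole, under the hood it’s just dnsmasq. A minimal sketch of the kind of config involved (subnet, lease time, and filename are made up for illustration; in practice you’d set this through the Pi-hole web UI rather than by hand):

```
# /etc/dnsmasq.d/99-dhcp.conf — hypothetical example
dhcp-range=192.168.1.100,192.168.1.200,24h   # address pool and lease time
dhcp-option=option:router,192.168.1.1        # default gateway for clients
dhcp-option=option:dns-server,192.168.1.10   # hand out the Pi-hole itself as DNS
```

The point is just that there’s not much to it, which is why a Pi handles it fine.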
I use tandoor myself, but mealie is also a solid choice
Be careful with the Intel laptop chips and make sure you understand what you’re getting. My work laptop has an i7 with 12 “cores” but it’s 10 of the low-powered E-cores and 2 of the hyperthreaded P-cores, so for heavy workloads (like compiling) it’s a glorified dual-core i3.
Sounds like maybe you ran it as a container, didn’t mount the document archive externally, and then updated the container. That would likely have blown away the actual ingested documents but left the metadata (including the OCR data) in place, assuming the database was either its own container or mounted externally
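The fix going forward is to bind-mount the document store (and the database, if it lives in the same container) so an image update can’t take them with it. A sketch assuming a paperless-ngx-style setup; the container-side paths here are from memory, so check your image’s docs for the real ones:

```yaml
# docker-compose sketch — keep state outside the container
services:
  paperless:
    image: ghcr.io/paperless-ngx/paperless-ngx:latest
    volumes:
      - ./data:/usr/src/paperless/data    # index/config (and sqlite db, if used)
      - ./media:/usr/src/paperless/media  # the actual ingested documents
```

With those on the host, you can recreate or update the container freely and the archive survives.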