I’m just this guy, you know?
#### MAXLOGAGE=24.0
Up to how many hours of queries should be imported from the database and logs? Values greater than the hard-coded maximum of 24h need a locally compiled `FTL` with a changed compile-time value.
I assume this is the setting you are suggesting can extend the query count period. It will still only give you the last N hours’ worth of queries, which is not what OP asked. I gather OP wants to see the cumulative total of blocked queries over all time, and I doubt the FTL database tracks the data in a usable way to arrive at that number.
So, like a running sum? No, I don’t think so, not in Pi-hole at least.
Pi-hole does have an API you could scrape, though. A Prometheus stack could track it and present a dashboard that shows the summation you want. There are other stats you could pull as well. This is a quick sample of what my Home Assistant integration sees.
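If you don’t want to stand up a whole monitoring stack, even a cron’d shell script can keep a running tally off-box. A rough sketch, assuming the legacy v5 `api.php` endpoint (Pi-hole v6 moved to a different REST API); the host and token are placeholders, and it should run once a day near midnight so each day is only counted once:

```sh
#!/bin/sh
# Placeholder host and API token -- substitute your own.
PIHOLE_HOST="pi.hole"
API_TOKEN="your-api-token"

# Pull today's blocked-query count from the legacy summary endpoint.
blocked=$(curl -s "http://${PIHOLE_HOST}/admin/api.php?summaryRaw&auth=${API_TOKEN}" \
  | jq -r '.ads_blocked_today')

# Append today's count, then sum the column for an all-time-ish total
# (all-time since you started logging, anyway).
echo "$(date -I) ${blocked}" >> "${HOME}/pihole-blocked.log"
awk '{sum += $2} END {print "blocked since logging began:", sum}' "${HOME}/pihole-blocked.log"
```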
Unless I misunderstand your question, draw.io can be downloaded as a standalone Linux application and run locally.
Likewise, the Xfig package should be available in most Linux repos. It’s old, but good enough for a quick sketch.
edit: aha. My mistake. My eyes slid over ‘open source’ in the title*, and even still I hadn’t realized it was an Apache license.
* Whaaat, it was pre-coffee? Let the purest among us cast the first stone.
Functionally they’re no different. LMDE draws its packages from Debian (probably stable) repos while mainline Mint draws from Ubuntu’s. So yes, Mint will have overall newer packages than LMDE but it’s generally rare for that to affect your ability to get work done unless some new feature you were waiting for gets introduced.
Ubuntu is the Enterprise fork of Debian backed by Canonical, and as such has contributed some controversy to the ecosystem.
Ubuntu leverages Snap packages which are considered ‘bloaty’ and ‘slow’ by a plurality of people with opinions on these matters. They work. Mint incorporates the Snap store into their package management. You might just need to turn it on in the settings.
With mainline Mint you get new base OS packages on Ubuntu’s release cycle, plus the Snap store.
With LMDE, then, you can run a stable base OS on Debian’s rock-solid foundation and release cycle, and still get your fresh software from the Snap store.
IMO, they’re the same for like 85% of use cases. I find I end up going to extra measures to disable certain Ubuntu-isms on my own systems that run it, effectively reverting it to Debian by another name.
For a student and occasional gamer, the trade-off is a stable base for your learning needs while still being able to get the latest desktop apps from Snap.
Oh, shoot. If you’re gonna roll your own then that’s probably the better play because at least then the firmware won’t be all locked down and you can pick known-compatible parts. Get it with no OS and sort it out later if you need to.
It’s easy enough to buy a Windows license key later on if you need it. The school might even make it available to you at a student discount. Boot it from a USB drive, even.
Heck, I ran Linux on my college computers back in the 90s. It was just a thing you did. Ah, memories…
Anyhoo, it largely depends on the school but for most intents and purposes Windows, Mac and Linux are interoperable. By that I mean they can generally open, manipulate and share all of the common document formats natively, with some minor caveats.
Many schools also have access to Microsoft O365, which makes the MS Office online suite available as well. All you really need to use that is a web browser.
I work in an office environment these days where Windows, Mac and Linux are all well supported and in broad use. I use Linux (Debian) exclusively, one coworker is all-Windows, and a third is all-Mac. Our boss uses Windows on the desktop, but also uses a MacBook. We are able to collaborate and exchange data without many problems.
I would say there are two main challenges you’re liable to face. The first is when Word files include forms or other uncommon formatting structures. LibreOffice is generally able to deal with them, but may mangle some fonts & formatting. It’s not common, but it does happen.
The other main challenge could be required courseware-- specialized software used in a curriculum for teaching-- and proctor software for when you’re taking exams online. Those might require Windows or Mac.
If it ever comes up, Windows will run in a Virtual Machine (VM) just fine. VirtualBox by Oracle is generally free for individual use, and is relatively easy to start up. Your laptop will probably come with Windows pre-installed, so you could just nuke it, install Linux, install VirtualBox, and then install Windows as a VM using the license that came with your laptop. You’d need to ask an academic advisor at the school if that’s acceptable for whatever proctor software they use.
I recommend against dual-booting a Windows environment if you can avoid it. Linux & Windows are uneasy roommates, and will occasionally wipe out the other’s boot loader. It’s not terribly difficult to recover, but there is a risk that could (will) happen at the WORST possible moment. However, it might be unavoidable if they use proctor software that requires windows on bare metal. Again, you’d have to ask the school.
Good luck!
Secure file transfers frequently trade off some performance for their crypto. You can’t have it both ways. (Well, you can, but you’d need hardware crypto offload or end-to-end MACsec, both of which are more exotic use cases.)
rsync is basically a copy command with a lot of knobs and stream optimization. It also happens to be able to invoke SSH to pipeline data over the network, at the cost of the overhead of encrypting the stream.
Your other two options are faster because of write-behind caching in the protocol and transferring in the clear-- you don’t bog down the stream with crypto overhead, but you’re also exposing your payload.
File managers are probably the slowest of your options because they’re a feature of the DE, and there are more layers of calls between your client and the data stream. Plus, it’s probably leveraging one of NFS, Samba or SSHFS anyway.
I believe “rsync -e ssh” is going to be your best overall option for secure, fast, and xattrs. SCP might be a close second. SSHFS is a userland application, and might suffer some penalties for it.
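For reference, a minimal sketch of the kind of invocation I mean (paths and hostname are placeholders):

```sh
# -a archive mode (perms, times, owners, symlinks), -A ACLs, -X extended attributes
# -e ssh carries the stream over SSH; add -z only if the link, not the CPU, is your bottleneck
rsync -aAX --info=progress2 -e ssh /srv/data/ user@backuphost:/srv/data/
```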
I used to selfhost more, but honestly it started to feel like a job, and it was getting exhausting (maybe also irritating) to keep up with patches & updates across all of my services. I weighed the risks of compromise and data loss from breaches and system failures. In the end, I decided my time was more valuable, so now I pay someone to incur those risks for me.
For my outward facing stuff, I used to selfhost my own DNS domains, email + IMAP, web services, and an XMPP service for friends and family. Most of that I’ve moved off to paid private hosting. Now I maintain my DNS through Porkbun, email through MXroute, and we use Signal instead of XMPP. I still host and manage my own websites but am considering moving to a ghost.org account, or perhaps just host my blogs on a droplet at DO. My needs are modest and it’s all just personal stuff. I learned what I wanted, and I’m content to be someone else’s customer now.
At home, I still maintain my custom router/firewall services, Unifi wireless controller, Pihole + unbound recursive resolver, Wireguard, Jellyfin, homeassistant, Frigate NVR, and a couple of ADS-B feeders. Since it’s all on my home LAN and for my and my wife’s personal use, I can afford to let things be down a day or two til I get around to fixing it.
Still need to do better on my backup strategies, but it’s getting there.
Gandi changed their TOS and price structure last year, so I ported everything over to Porkbun for a small savings, but mostly as a big middle finger to Gandi.
If you’re gonna get banged for that kind of cash for functions you’re already using, you may as well look at better registrars and get better value for your spend.
Shop around.
I appreciate the pun!
For home networks, I agree there’s usually not a need. I do it for portability reasons: I always use 192.168.0.0/24 addresses (192.168.0.0 - 192.168.0.255) for services I’m hosting on-prem at home. My home router is a Linux box connected directly to my ISP’s network on one interface and to a switch carrying several VLANs on the other interface, with IP forwarding and IP masquerade enabled. I also use IPv6 a fair amount and receive a healthy whack of addresses from my ISP, portions of which I delegate to each VLAN. By my count I have 6 or 8 active VLANs on my home net: for the adults, for work, for the kids, for the central services, for isolating untrustworthy IoT doodads, for infrastructure management, and for guests.
Most of my so-called central services have been hosted on the same Linux box that does the routing, using containers bound to those subnet-zero IPs on the loopback. It skeeves me out a bit to do that though, so I’ve been moving that stuff over to a new applications server in a DMZ VLAN. I know what I’m doing, but I’m also incurring unnecessary risks by having structured my service hosting the way I have.
The IP-on-loopback trick lets me move those services from a VIP on the router to an IP on the new service host without having to reconfigure everything. I just fake in some /32 routes where I need to, and the traffic goes where I want it to.
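In concrete terms it’s just a couple of `ip` commands; the addresses below are made up for illustration:

```sh
# On the box that should own the service IP, park it on the loopback:
ip addr add 192.168.0.53/32 dev lo

# On the router, add a host route pointing at that box's real VLAN address:
ip route add 192.168.0.53/32 via 10.10.20.5

# Moving the service later is just re-adding the /32 on the new host and
# updating this one route; clients keep talking to 192.168.0.53 unchanged.
```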
I admit up front this isn’t great discipline, but as I said I know what I’m doing and it only sounds crazy to me when I try to explain it to other people. Lol.
I do this, but I also work in tech and have a pretty solid grasp of routing and how that all works. I agree it may seem overkill for many installs, but makes sense for certain use cases. I’ll try to explain without writing a book. I’ll be glossing over a LOT of texture in the following…
In networking, a router is considered to be a node in a graph with multiple host IP addresses, one for each edge. It has an interface-- sometimes physical but more often virtual-- on each edge (network segment, VLAN) that connects to it, and which usually serves as the gateway IP for that edge. In larger networks where there is more than a single router, the routers must all tell each other which router has which destination network segment, so they all speak a routing protocol like RIP, OSPF or IS-IS. Each of the speakers must be able to identify itself uniquely among the others so the others know which node is making what advertisements. To do this, they each are assigned a unique router ID, which is normally a 32-bit integer value represented as a dotted quad. Customarily this is an IP address, and the protocols further this idea by adopting the highest numbered IP address on the device, or the address of its loopback interface if defined.
The point of a routing protocol is for the participating nodes to advertise IP ranges associated with their connected edges. They assert advertisements for each edge when it is active (i.e., the interface is UP) and withdraw or expire them when the edge is unavailable (i.e., the interface is in any state other than UP). Every time an edge changes state-- goes from UP to not-UP, or not-UP to UP-- that advertisement must propagate across the whole system, and every node must stop forwarding traffic to recalculate its own best path to the remaining available edges. This is called reconvergence, and network engineers try to do things to minimize the number and frequency of these events.
Practically, one of the things network engineers do to try to avoid instability is not having the ID of a speaker change dynamically. Going back to how the device selects its router ID, it considers the loopback IP first, or else the highest numbered IP active on the device at the time of evaluation. Edge interfaces can go UP or not-UP for any number of reasons at any time, thus they are less than ideal to use for the router ID. The loopback interface by contrast is always up. This interface is typically assigned the IP the routing protocol will use for its router ID.
In practice, the loopback is the only interface on a router that can be said to belong to the router itself¹, and not to an edge connected to the router². There are other practical reasons in routing to do this, but they all come back to the fact that the loopback is always up, and therefore it’s always apt to be advertised as an available edge.
So what does any of this have to do with servers, applications and self hosting?
Applications that provide services over the network, DNS servers for example, need to bind to at least one IP address and protocol port. On servers with multiple interfaces, these applications normally bind to all available interface addresses, using the address 0.0.0.0. In some situations this might be undesirable. Maybe you don’t want your pihole serving your internal DNS to your ISP, or maybe you have several VLANs at your house and want to use a single IP address for DNS across all of your VLANs, or you don’t trust the VLAN interface IPs to always be the same.
Adding an IP to the `lo` interface ensures that IP is always available and reachable. It provides a single place for all hosts in the system to go that isn’t pinned to any one of the possible VLAN interface IPs.
In my own home setup, I define several IPs on the loopback for different containers that all want to use port 8443/tcp for their public port. This gives me the flexibility of being able to assign different services their own IP (which I can then reference by name in DNS) on their native port vis-à-vis the documentation. So my Pihole container has its address and my Unifi controller container has its own as well.
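As a sketch of the pattern (image names and addresses are placeholders, not my actual setup):

```sh
# Two service IPs parked on the loopback:
ip addr add 192.168.0.10/32 dev lo
ip addr add 192.168.0.11/32 dev lo

# Each container publishes its native 8443 only on its own IP:
docker run -d --name service-a -p 192.168.0.10:8443:8443 example/service-a
docker run -d --name service-b -p 192.168.0.11:8443:8443 example/service-b

# DNS then maps service-a.home and service-b.home to the respective IPs.
```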
Anyway, this is very much a Done Thing in the industry. Not everyone needs it, but it’s a useful technique in the right circumstances.
I maintained an ejabberd server for myself and a few friends for many years. The config language was a little arcane to me at first, but it was pretty solid after I got it set up. I used a couple of different client apps with it over that time, most of which are still available on the F-droid repo. It was fun, but got annoying when the server needed maintenance, or was down, or because of any of the other minor nuisances that come along with maintaining a service for others to use.
Eventually we all ended up just moving over to Signal because it was just as good from the view of cost-benefit and risk for us. We’re just trading stupid memes and Saturday night stories among ourselves. The most radical thing we might organize is a trip to Vegas for the week.
Definitely try it out, but consider that being a comms provider for others is always a bigger chore than it seems at the outset.
For general use, when you can install software through your system’s package manager, that’s the preferred way to get it onto your system. For the most part, those applications live under /usr.
If for some reason you prefer to install a package manually, best practice is to install it outside /usr to avoid potential conflicts with existing system libraries. The /opt (“optional”) hierarchy is a common place to install these apps. Many modern install scripts already default to using /opt.
It’s also convenient for backing up those apps.
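A rough example of what that usually looks like for a tarball release (names and versions are placeholders):

```sh
sudo mkdir -p /opt/someapp
sudo tar -C /opt/someapp --strip-components=1 -xzf someapp-1.2.3-linux-x64.tar.gz

# Optional: put the binary on the PATH without touching /usr proper.
sudo ln -s /opt/someapp/bin/someapp /usr/local/bin/someapp

# Backups are then just a matter of grabbing /opt/someapp.
```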
What’s your situation for data backup? You mentioned a homelab and a NAS; are you running regular backups to an off-box store? You could mate it with a few TB of inexpensive USB disk, maybe some software RAID, and use it for off-box backups. Doesn’t have to be fast, just reliable.
Specs like that, you have some options. Virtual assistant, IPCam NVR like MotionEye or Frigate, media server for your car (takes DC voltage, right?), weather base station, ADS-B feeder, smart mirrors.
Or (if you’re in the US) you could repair it and then, if you donate it to a suitable charity, take the cost of the repair as a deduction on your taxes. Probably doesn’t help you that much, but it could really help someone else who needs it.
Or, just wipe it and send it to e-waste.
I ran an ejabberd node on an old x86 for years for family and some close friends. Works great.
Then I got tired of maintaining devices after long days at work doing IT things. We talked. Signal is easier. We moved over to that, in the end.
A Pi3 1GB will easily scale to 4 people and beyond. XMPP is really lightweight for text and images. Consider a Pi4 for voice or video though.
Sends in the clear, no error checking, and the `nc` command is promiscuous while it’s bound to the port. No crypto or compression to slow you down. Just a raw pipe of bytes.
It’s a bad idea, part of the forbidden codex known only to old, irreverent graybeards who know better but don’t care anymore. There are better ways that are both more reliable and better practice.
You might want to look into using passwordless SSH keys within your script (see `ssh -i`), which isn’t the most secure practice on multiuser systems, but is okay-ish for DevOps and backups. Add other factors like aggressive allowed-hosts settings on the receiver, and rotate the keys regularly.
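Something along these lines, as a sketch (hostnames, paths and the source network are placeholders):

```sh
# A dedicated passwordless key used only by the backup job:
ssh-keygen -t ed25519 -f ~/.ssh/backup_key -N ""

# On the receiving host, lock the key down in ~/.ssh/authorized_keys:
#   from="192.168.0.0/24",no-agent-forwarding,no-port-forwarding,no-pty ssh-ed25519 AAAA... backup-job

# In the script itself:
rsync -a -e "ssh -i ~/.ssh/backup_key" /srv/data/ backup@nas.lan:/backups/data/
```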
Oh, I never said they weren’t absolute prats about invading user space with advertising their bullshit. The Lens fiasco, Snaps, the popup warnings in `apt` breaking scripts, and the lack of UI toggles to easily disable those nag messages are all reasons I run other distros. There’s a big Mint-colored button to turn on the Ubuntu experience without the nagging.
You have other choices that do not shove that bullshit in your face. Canonical is gonna canonical. Nobody said you have to play their game.
My point was they are not withholding anything community-based from anyone. They are entitled to charge for their original work, even if they are pushy about it. They even abide by the license and distribute the changes when complete, but they’re not gonna just do it for giggles.
Not that I’m a fan of Ubuntu here (I generally don’t run it when I can run anything else), but I do want to say I think you’ve missed the point of the Pro tier.
Ubuntu releases two stable versions a year, which are supported for about 9 months each. This is like a slow rolling distribution, and makes the newest software available. It receives regular security updates from upstream, from Canonical, and from backports for that support window. Most users install this version.
Ubuntu LTS editions are similar to the above, but receive the same kind of security updates for 5 years instead. These releases are generally targeted at Enterprise users who value stability over having the newest software, and for whom upgrading comes with significant time, expense and risk. The 5-year window is customary among other enterprise distros, and is broadly supported throughout the dev community.
Ubuntu Pro extends the LTS editions for an additional 5 years, meaning a Pro install enjoys 10 years of security updates from upstream, backports, and from Canonical where needed. Canonical might even open source their fixes back into upstream for other maintainers and distros to use, depending on the situation. However, since Canonical is providing the work, they charge subscription fees to cover their costs, aimed at their target audience: Enterprises who can’t, or REALLY don’t want to, upgrade.
Why an Enterprise might not want to upgrade has to do with risk and compliance. Corporate IT security is a different world, where every bit of software may need to be reviewed, assessed, tested and signed-off upon. Major software upgrades would need to be recertified to mitigate risk and ensure compliance, which takes significant time and expense to complete in good faith. Not having to do it every 2 or 5 years is money in the bank, especially when the environment doesn’t introduce new requirements very often.
Canonical is meeting a market demand with their Pro tier by allowing these customers to spend a fraction of their recertification costs on a software subscription. It’s overall good for the ecosystem because you have what amounts to corporate sponsors pumping money into keeping older packages maintained for longer. This lets them keep using the same software distro all the rest of us can use for free.
I’m not shy about calling bullshit on ANY distro that operates in bad faith, and they all get into some BS from time to time. Nevertheless, Canonical are acting in good faith on this, and are merely collecting money for their time and skill to provide maintenance on FOSS packages that might otherwise go unmaintained.
tl;dr: Pro tier is for Enterprise customers who need extra-long term support and are willing to pay for it. Canonical is meeting a market demand so they can remain competitive for use in those environments, which is good for everyone. It’s benign. Keep the pitchforks sharp and the torches dry for another day.
edit: typos
Sure, you could set up any syslog receiver stack like Splunk (as the other OP suggested) or an ELK Stack or even just syslog-ng or rsyslog to disk. Anything that can ingest syslog format will handle Unifi logs.
Decide how you want to receive, store and parse your logstream data. Once you have a syslog receiver set up, point Unifi (System > Site > Enable Remote Logging) at the syslog server’s remote address:port and start shipping logs.
Whatever you do with those logs is out of scope for this discussion, but your logger should at least ingest them and spool them.
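For example, a bare-bones rsyslog receiver takes only a few minutes to stand up (Debian-ish paths, and the source subnet is a placeholder):

```sh
cat <<'EOF' | sudo tee /etc/rsyslog.d/30-unifi.conf
# Listen for syslog over UDP/514.
module(load="imudp")
input(type="imudp" port="514")

# Spool anything coming from the Unifi gear into its own file.
if $fromhost-ip startswith '192.168.0.' then /var/log/unifi-devices.log
& stop
EOF
sudo systemctl restart rsyslog
```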
No worries, the other poster just wasn’t being helpful. And/or doesn’t understand statistics & databases, but I don’t care to speculate on that or to waste more of my time on them.
The setting above maxes out at 24h in stock builds, but can be extended beyond that if you are willing to recompile FTL with different parameters to allow a deeper look-back window for your query log. Even then, a second database setting farther down that page caps the age of all query logs at 1y, so at best you’d get a running tally of up to a year. That would probably come at the expense of performance for dashboard page loads, since the number is presumably computed at page load; the live DB call is intended for windows that are short relative to the database’s lifetime.
If you want an all-time count, you’ll have to track it off box because FTL doesn’t provide an all-time metric, or deep enough data persistence. I was just offering up a methodology that could be an interesting and beneficial project for others with similar needs.
Hey, this was fun. See you around.