
It actually kinda is. Is someone trying to sell you a firewall?


Well, what I can say is that since my team migrated everything to LXD/Incus the number of tickets somehow related to the virtualization solution we use dropped really fast. Side note: we started messing around with LXD from Snap (but running under Debian) and moved to the Debian 12 version as soon as it was made available.

About the kernel things, my upstream fix comment was about how Canonical / Ubuntu does things. They usually come up with some “clever” idea to hack something, implement it, and then upstream actually solves the issue after proper evaluation and Ubuntu just takes that and replaces their quick hack. This happens quite frequently and it’s not always a few lines of code; for instance, it happened with the mess that shiftfs was and then the kernel guys came up with a real solution (idmapped mounts) and now you see Canonical going for it. Proxmox inherits the typical Canonical mess.


Multi-level wildcards don’t exist at all - either don’t use wildcards or use a certificate with multiple wildcard names, e.g. *.xyz.example.org + *.abc.example.org.
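If you go the Let’s Encrypt route, a rough sketch of requesting one certificate with two wildcard names looks like this (wildcards require the DNS-01 challenge; the domains are just placeholders):

certbot certonly --manual --preferred-challenges dns \
  -d '*.xyz.example.org' -d '*.abc.example.org'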


So you say it is “buggy as fuck” because there’s a bug that makes it so you can’t easily run it if your locale is different than English? 😂 Anyways you can create the bridge yourself and get around that.
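For reference, creating the bridge by hand in /etc/network/interfaces is just a few lines - a rough sketch, assuming your NIC is enp1s0 and with placeholder addresses:

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0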

About the link, Proxmox kernel is based on Ubuntu, not Debian…


You funny guy 😂😂

Now, I’m on my phone so I can’t write that much, but I’ll say that the post I linked to isn’t about potential issues, it goes over specific situations where it failed, ZFS, OVPN, etc. but obviously I won’t provide anyone with crash logs and kernel panics.

About ESXi: Incus provides you with a CLI and Web interface to create, manage and migrate VMs. It also provides basic clustering features. It isn’t as feature complete as ESXi but it gets the job done for most people who just want a couple of VMs. At the end of the day it is more in line with what Proxmox offers than with ESXi, BUT it’s effectively free, so it won’t hold back important updates from users running on free licenses.

If you list what you really need in terms of features I can point you to documentation or give my opinion on how they compare and what to expect.



DO NOT migrate / upgrade anything to the snap package. That package is from Canonical and is from after the Incus fork, which means that if you go for it you may never be able to migrate to Incus later and/or you’ll become a hostage of Canonical.

About the rest, if you don’t want to add repositories you should migrate to the LXD LTS from the Debian 12 repositories. That version is, and will remain, compatible with Incus - both the Incus and Debian teams have said that multiple times and are working on a migration path. For instance, the LXD from Debian will still be able to access the Incus image server while the Canonical one won’t.
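If it helps, the Debian 12 path is roughly this (a sketch only; it assumes the lxd package from the Debian repos and that the default images: remote is available):

# Install the LXD LTS packaged by Debian 12 and initialize it with defaults
apt install lxd
lxd init --auto

# Quick smoke test: launch a container from the community image server
lxc launch images:debian/12 test
lxc list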


“Big boy domains” on a network aren’t very easy to deal with. For instance, sometimes you have devices in your network running DNS queries for your other devices and they end up leaking to the outside because, well… they’re FQDNs… I’ve also experienced mDNS issues; for some reason it seems to slow down a lot once you’re not using .local as your domain.


How are you dealing with mDNS and your custom domain? Isn’t it causing… issues and mismatches?


LXD/Incus provides a management and automation layer that really makes things work smoothly, essentially replacing Proxmox. With Incus you can create clusters, download, manage and create OS images, run backups and restores, bootstrap things with cloud-init, move containers and VMs between servers (even live sometimes) - and those are just a few things you can do with it and not with pure KVM/libvirt. It also has a WebUI for those interested.

A big advantage of LXD is that it provides a unified experience for dealing with both containers and VMs: no need to learn two different tools / APIs, as the same commands and options are used to manage both. Even profiles defining storage, network resources and other policies can be shared and applied across both containers and VMs.
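For example - a quick sketch using the stock image server, the instance names are just placeholders - the only difference between a container and a VM is a flag:

# System container
lxc launch images:debian/12 web01

# Virtual machine: same command, same image server, just add --vm
lxc launch images:debian/12 web02 --vm

# Same management commands afterwards
lxc exec web01 -- apt update
lxc exec web02 -- apt update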

Incus isn’t about replacing existing virtualization techniques such as QEMU, KVM and libvirt, it is about augmenting them so they become easier to manage at scale and overall more efficient. It plays in the same space as, let’s say, Proxmox and I can guarantee you that most people running that today will eventually move to Incus and never look back. It works way better, truly open-source, no bugs, no holding back critical fixes for paying users and way less overhead.


Re incus: I don’t know for sure yet. I have an old LXD setup at work that I’d like to migrate to something else, but I figured that since both libvirt and proxmox support management of LXC containers, I might as well consolidate and use one of these instead.

Maybe you should consider consolidating into Incus. You’re already running LXC containers, so why keep dragging along all the Proxmox bloat and potential issues when you can use LXD/Incus, made by the same people who made LXC, which is WAY faster, more stable, more integrated and free?


Does someone know a tool that creates a Certificate Authority and signs certificates with that CA? (…) just a tool that spits out the certificates and I manage them that way, instead of a whole service for managing certs.

Yes, written in Go, very small and portable: https://github.com/FiloSottile/mkcert.
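Rough usage sketch (the hostnames are just examples):

# Create a local CA and add it to the system / browser trust stores
mkcert -install

# Issue a certificate + key covering several names at once
mkcert example.test "*.example.test" localhost 127.0.0.1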

Just be aware of the risks involved with running your own CA.

You’re adding a root certificate to your systems that will effectively accept any certificate issued with your CA’s key. If your private key gets stolen somehow and you don’t notice it, someone might be issuing certificates that are valid for those machines. Real CAs also have ways to revoke certificates that are checked by browsers (OCSP and CRLs), and they may employ other techniques such as cross-signing and chains of trust. All of those make it so a compromised certificate is revoked and not trusted by anyone after the fact.

Why not Let’s Encrypt?

That’s fair, but if your only concern is “I do not want any public CA to know the domains and subdomains I use” you can get around that.

Let’s Encrypt now allows for wildcards, so you can probably do something like *.network.example.org and have an SSL certificate that will cover any subdomain under network.example.org (eg. host1.network.example.org). Or even better, get a wildcard like *.example.org and you’ll be done for everything.

I’m just suggesting this alternative because it would make your life way easier and potentially more secure without actually revealing internal subdomains to the CA.

Another option is to just issue certificates without a CA and accept them one at a time on each device. This won’t expose you to a possibly stolen CA private key and you’ll get notified if the previously accepted certificate of some host changes.

openssl req -x509 -nodes -newkey rsa:2048 \
-subj "/CN=$DOMAIN_BASE/O=$ORG_NAME/OU=$ORG_UNIT_NAME/C=$COUNTRY" \
-keyout $DOMAIN_BASE.key -out $DOMAIN_BASE.crt -days $OPT_days "${ALT_NAMES[@]}"


So while I agree with some, or even the majority, of your commentary I would like to add a bit of context.

Well, the question is why are you excluding web banking? (…) If you’re allergic to webapps for some reason

I’m not allergic, I just happen to live in a country where banks unfortunately force you to get their mobile app for certain operations / you can’t do everything on their web app because of “security”. There’s a big thing in Europe around secure transaction authorizations that requires secure 2FA methods (not SMS) and banks here decided to implement that in a way that their mobile apps kinda work as a 2FA for the web version. Heck, I can’t even generate a virtual credit card here without installing an app. Compatibility layers / emulation such as Waydroid, and even GrapheneOS, are flagged by most of the banking apps here as well and they don’t allow you to proceed.

Why are you using contactless payment? Unsatisfied with the amount of data your bank collects

If I’m using the app from the banking alliance they won’t gather more info than what they already do whenever I swipe a debit or credit card on a payment terminal. It kinda becomes about convenience at that point. Obviously the same can’t be said for Apple Pay / Google Wallet and I avoid them.

Govt provides electronic versions of your identity card (…) Either way, this isn’t something you “need”, as carrying your documents around really isn’t a problem…

Actually, that’s something I need, let me tell you why: I’m required to digitally sign a LOT of documents every day and here you have two ways to do that. The classic one is by having a smart card reader on your computer: open a desktop app, choose a file, place the identity or professional card into the reader and type a PIN code. The second way is to open the application and click “sign with your phone”; this will prompt you to open the govt phone app and enter a PIN / biometric authentication there and the document will get signed as well. While the first option works fine, it’s just annoying to have to carry a card reader around to meetings and other places and it also takes way more time for the desktop app to respond and sign the document if you use the identity card.

…first you buy an IoT device that connects to “the cloud”, then you say you need proprietary software to access it. Of course you do, that’s the kind of device you bought - the vast majority of IoT devices are made with zero regard to the user’s privacy and security, to hackability or right to repair. (…) That said, it’s very easy to find hackable devices if you do the bare minimum research

You proceed to give examples of vacuum cleaners and other stuff that is indeed easy to find in more open versions.

I’m all for open-source IoT, I like it as a hobby and I run HomeAssistant; most of my IoT is DIY ESP32-S2 devices with sensors and relays. I also have some cheap relays and plugs from Aliexpress based on the BL2028N that I managed to flash with ESPHome / LibreTiny, however things become a LOT harder when it comes to CCTV.

Cameras in general aren’t as easy to deal with as cheap plugs. There’s the OpenIPC project but it seems to only support very specific chips that are sometimes older, hard to find or not price/feature competitive with TP-Link’s offerings.

For what it’s worth, TP-Link Tapo cameras (TC70, 71 etc.) aren’t that bad when it comes to privacy, there isn’t much “cloud”. They do require you to use their mobile app to set up the camera, but afterwards you can just run them on an isolated VLAN / firewall them from the internet completely and you’ll still be able to use all of the camera’s features. Those cameras provide a generic RTSP stream that even VLC can play, and there’s also a good HA integration that provides all the features of the TP-Link Tapo application like pan / move / downloading recordings from the camera’s SD card and whatnot, 100% locally / offline.
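For reference, the stream URL is usually something like the line below - the credentials are the “camera account” you set in the Tapo app, the IP is a placeholder, and the exact path can vary by model/firmware (stream1 is typically the HD stream, stream2 the SD one):

vlc rtsp://camera-user:camera-pass@192.168.50.10:554/stream1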

but don’t pretend there’s some insurmountable barrier preventing anyone from using it

No, but it would make my life considerably worse or at least impractical in some cases.


The reason why I need native banking apps is because there are some features that are only available through the app and not with web banking. Another thing about those compatibility layers is that banking apps usually know how to detect rooted devices and stuff like that and won’t work.

That’s unfortunate but it is what it is.


Yes, even GrapheneOS or Calyx will provide a much better experience.


If your banking app is proprietary

Are you drunk, what bank doesn’t have a proprietary application? lol


Oh yeah, my bank will definitely support Linux phones lol


You can always do what a lot of people are doing, use Debian as your base OS and install all software via Flatpak, solid OS with the latest software. Doesn’t get any faster :P


So… why don’t you transition to Debian and use it for the next 20 years? :)


None, because a phone is useless without applications.

Edit: I’m all for a truly open-source phone with no tracking but at some point things must be useful as well, and applications from the Play Store or App Store are something people have to get and use every day. For instance, in my country, if you exclude browser-based banking, no bank will work on those Linux phones and the NFC / contactless payment system here requires either Apple Pay, Google Wallet or a proprietary app developed by a banking alliance. The govt provides electronic versions of your identity card, driving license and a ton of other govt-related cards that also require an Android/iOS app they make… Even something simple like setting up a TP-Link Tapo wireless security camera will require an app these days.




I had a lot of issues when installing Debian after some days, because of a non-optimal suggested partitioning layout, misconfigured mirror-server list or network for example.

For what it’s worth, I never had those kinds of issues with the Debian installer; to me it seems that anyone capable of installing Windows 10/11 is capable of installing Debian on the basis of “next > next > next”, keeping everything at the defaults, and will get to a working desktop.

I’ve seen a few people complaining about the Debian installer but I never had issues at all. From servers to laptops it always seems to get things right for me.


TL;DR: If you just care about having something that works reliably then install Debian + GNOME + Software as Flatpaks. You’ll get a rock solid system with the latest software.
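On a fresh Debian + GNOME install that roughly boils down to this (a sketch; Firefox is just an example of a Flatpak app):

# Enable Flatpak + GNOME Software integration and add the Flathub remote
apt install flatpak gnome-software-plugin-flatpak
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# Install apps from Flathub from then on
flatpak install flathub org.mozilla.firefox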

About the desktop environment: the “whatever you go for, it’s entirely your choice” mantra when it comes to DEs is total BS. What happens is that you’ll find out that while you can use any DE, in fact GNOME will provide a better experience because most applications on Linux are designed around / depend on its components. Using KDE or XFCE is fun until you run into some GTK/libadwaita application and small issues start to pop up here and there, windows that don’t pick up your theme, or you just end up with a Frankenstein of a system composed of KDE + a bunch of GTK components.


Oh, now I remembered that there’s ActivationPolicy= in the [Link] section that can be used to control what happens to the interface. At some point I even reported a bug on that feature and VLANs.

I thought it had something to do with the interface having an IP (…) LinkLocalAddressing=ipv4

I’m not so sure it is about the interface having an IP… I believe your current LinkLocalAddressing=ipv4 is forcing the interface to come up since it has to assign a link-local IP. Maybe you can set LinkLocalAddressing=no and ActivationPolicy=always-up and see how it goes.
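Something along these lines, as a sketch (the interface name is a placeholder):

# /etc/systemd/network/10-eth1.network
[Match]
Name=eth1

[Link]
ActivationPolicy=always-up

[Network]
LinkLocalAddressing=no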


Am I mistaken that the host shouldn’t be configured on the WAN interface? Can I solve this by passing the pci device to the VM, and what’s the best practice here?

Passing the PCI network card / device to the VM would make things more secure as the host won’t be configuring / touching the network card exposed to the WAN. Nevertheless, passing the card to the VM would make things less flexible and it isn’t required.

I think there’s something wrong with your setup. One of my machines has a br0 and a setup like yours. 10-enp5s0.network is the physical “WAN” interface:

root@host10:/etc/systemd/network# cat 10-enp5s0.network
[Match]
Name=enp5s0

[Network]
# Note that we're just saying that enp5s0 belongs to the bridge, no IPs are assigned here.
Bridge=br0
root@host10:/etc/systemd/network# cat 11-br0.netdev
[NetDev]
Name=br0
Kind=bridge
root@host10:/etc/systemd/network# cat 11-br0.network
[Match]
Name=br0

[Network]
# In my case I'm also requesting an IP for my host but this isn't required. If I set it to "no" it will also work.
DHCP=ipv4

Now, I have a profile for “bridged” containers:

root@host10:/etc/systemd/network# lxc profile show bridged
config:
 (...)
description: Bridged Networking Profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
(...)

And one of my VMs with this profile:

root@host10:/etc/systemd/network# lxc config show havm
architecture: x86_64
config:
  image.description: HAVM
  image.os: Debian
(...)
profiles:
- bridged
(...)

Inside the VM the network is configured like this:

root@havm:~# cat /etc/systemd/network/10-eth0.network
[Match]
Name=eth0

[Link]
RequiredForOnline=yes

[Network]
DHCP=ipv4

Can you check if your config is done like this? If so it should work.
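If it still misbehaves, networkctl usually tells you whether the bridge and the physical port are in the state you expect:

networkctl list
networkctl status br0
networkctl status enp5s0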


there are other word processors that are at least as good

Their only problem is that this isn’t true. :P LibreOffice and friends might work for quick jobs in isolation and whatnot, but once you have to collaborate with others and use advanced features like macros it’s game over.

For what it’s worth, LibreOffice can’t even keep the default spacing on a bullet list consistent with what MS Word does, and this is an issue if you share a document you’re working on with someone else and then things appear in different places / pages.


Debian 12 has had at least two system breaking bugs in the last month or two,

What are you talking about specifically? I manage dozens of Debian 12 servers and have run it on one of my desktop machines since the release, and I haven’t run into any issues so far, stable as usual, but I would be interested in knowing about those.

I honestly don’t understand the love for Debian either.

Because, unlike Ubuntu, it’s truly community driven, not subject to the whims of some corporation and more stable than the others.

Also recommending GNOME to anyone used to Windows is just going to frustrate them if they’re already hesitant.

While I get your point and I like XFCE very much, the “whatever you go for, it’s entirely your choice” mantra when it comes to DEs is total BS. What happens is that you’ll find out that while you can use any DE, in fact GNOME will provide a better experience because most applications on Linux are designed around / depend on its components. Using KDE/XFCE is fun until you run into some GTK/libadwaita application and small issues start to pop up here and there, windows that don’t pick up your theme, or you just end up with a Frankenstein of a system composed of KDE + a bunch of GTK components.


Because the user getting a hundred popups on app start for various files the app needs isn’t exactly a usable experience

It isn’t, but until apps can declare in a simple config file which paths they require, that’s the way things should work. I guess that would motivate the developers who are packaging apps into Flatpaks to properly list whatever files the application requires. If they don’t, the application will still work fine but be a bit annoying.
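Until something like that exists, the manual workaround is flatpak override - a sketch, where the app ID and path are just examples:

# Grant one app read/write access to a specific directory, for the current user only
flatpak override --user --filesystem=~/Projects org.example.SomeApp

# Review what you've overridden for that app
flatpak override --user --show org.example.SomeApp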

Also, blocking the app’s main thread (which is the only way you could do this) is likely to break it and cause tons of user complaints too. Aside from apps using the APIs meant for the purpose of permission systems, there’s no good way to make it work.

Yet macOS does it and things don’t go that badly. In that example, how do you think they do it for command line tools? The system intercepts the request, shows the popup and waits for the user’s input. I’ve seen the same happening with older macOS applications that aren’t aware it could happen and yes, the main thread is blocked and the application seems to crash.

I think it’s way better doing it this way, and still having a somewhat productive container and isolation experience, than just bluntly blocking everything - something that also breaks apps sometimes.


😂 😂 😂

What counter arguments? If one doesn’t understand how things like Docker Hub, VSCode and the over-reliance on proprietary repositories are an issue, I can’t say much more. Immutable distros are just yet another door for that type of bullshit and it’s a pretty obvious one.


Hints? Don’t use Docker, for your own sake. Why would you? You’re already running LXC containers, just set up whatever you need inside those and you’re good to go with way less overhead and bloat.

While you’re at that, did you know that the creators of LXC have a solution called LXD/Incus that is way better at managing LXC containers and can also create and manage VMs? For what it’s worth, it’s a 100% free and open-source solution that can be installed on any clean Debian 12 setup from their repository and doesn’t require 1000 different daemons like Proxmox does, nor does it constantly nag you for a license. :)


While what you say is true, the “portals” were an afterthought, an imposition on developers and a cumbersome, poorly documented solution. Just like the theming and most other things.

Instead of bluntly blocking things, why can’t Flatpak just simulate a full environment and prompt the user whenever some application wants to read/write a file / unix socket at some path? A GUI capable of automatically enumerating those resources and a bunch of checkboxes like “app X and Y both have access to the socket at /var/run/socketY” would also solve most of the issues.


It’s a bit hard to search info about it with the name. But it’s a fantastic project

Searching for LXD usually returns more useful information… Incus is just a fork as you know.


Great post, lots of detailed information for new users. Now I’m gonna tell everyone what you conveniently omitted about what’s driving immutable distros and what your “future” section should’ve looked like.

Immutable distros solve the same problem that was solved years ago, with a twist: they’re all about turning things that were easy into complex, “locked down”, “inflexible” bullshit to justify jobs and paid tech stacks and a soon-to-be-released proprietary orchestration and/or repository solution.

We had Ansible, containers, ZFS and BTRFS that already provided all the immutability required, but someone decided that it is time to transform proven development techniques in the hopes of eventually selling some orchestration and/or other proprietary repository / platform in the likes of Docker / Kubernetes. Docker isn’t totally proprietary and there’s Podman, but it doesn’t really matter because in the end people/companies will pick the proprietary / closed option just because “it’s easier to use” or some other specific thing that will be good in the short term and very bad in the long term.

“Oh but there are truly open-source immutable distros”… true, but again, this hype is much like Docker’s and it will invariably and inevitably lead people down a path that will then require some proprietary solution or dependency somewhere (DockerHub), one that is only required because the “new” technology alone doesn’t deliver as others did in the past.

People now popularizing immutable distributions clearly haven’t had any experience with them before the current hype. Immutable systems aren’t a new thing, we’ve been using them since the rise of MIPS devices (mostly routers and IoT), and we’ve been moving to ARM and mutable solutions because they’re objectively better, easier to manage and more reliable.

The RedHat/CentOS fiasco was another great example of these ecosystems, and once again all those people who got burned, instead of moving to a truly open-source distribution such as Debian, decided to pick Ubuntu - it’s just a matter of time until Canonical decides to pull some move.

Nowadays, without the Internet and the ecosystems, people can’t even do shit anymore, and the current state of things when it comes to embedded development is a great example of this. In the past people were able to program AVR / PIC / Arduino boards offline; today everyone depends on the PlatformIO + VSCode ecosystem to code and deploy to the devices. VSCode is “open-source” until you realize that 1) the language plugins that you require can only be compiled and run in official builds of VSCode and 2) Microsoft took over a lot of the popular 3rd party language plugins and repackaged them under a different license… making it so that if you try to create a fork of VSCode you can’t have support for any programming language because it won’t be an official VSCode build. MS be like :).

All those things that make development very easy and lowered the bar for newcomers have the dark side of being designed to reconfigure and envelop the way development gets done so someone can profit from it. That is sad, and above all it sets dangerous precedents and creates generations of engineers and developers that don’t have truly open tools like we did.

This is all about commoditizing development - it’s a feedback loop that never ends. Yes, I say commoditizing development because if you look at it, those techs only make it easier for the entry-level developer, and companies, instead of hiring developers for their knowledge and ability to develop, are just hiring “cheap monkeys” that are able to configure those technologies and cloud platforms to deliver something. At the end of the day, the business of those cloud companies is transforming developer knowledge into products/services that companies can buy with a click.


Here’s a revised flowchart for you:

  • You need professional software like MS Word, Autodesk, Adobe, NI Circuit Design for collaboration with others > Stick with Windows;
  • Any other case > Install Debian + GNOME + Software as Flatpaks. You’ll get a rock solid system with the latest software;

Done.


Very new? The thing has been around since 2018. Anyways I know for a fact that a few cloud providers and some enterprise types are using it to power their infrastructure.


XCP-ng has an official terraform provider, whilst ESXi and Proxmox don’t. The unfortunate part is that there isn’t even a provider for KVM, which really sucks.

Use LXD/Incus instead, there’s a provider for it.


Well, Proxmox doesn’t have it… however LXD/Incus has one. Maybe you should try it as a replacement for Proxmox? I mean, it’s new-generation software, can be installed on a clean Debian 12 setup from the repositories and does both containers and VMs.


but I suspect something will come along to address these issues and snatch the market away from Flatpak.

I believe it could only be fixed by a team from GNOME or KDE, they’re the ones in a position to develop something like Flatpak but deeply integrated with the system instead of trying to get around it.

For what it’s worth, Apple did a very good job when it came to the isolation and containerization of desktop applications, but again this was only possible because they control both sides.

Apple enforces a LOT of isolation, they call it sandboxed apps and it is all based on capabilities, you may enjoy reading this. Applications get their isolated space at ~/Library/Containers and are not allowed to just write to any filesystem path they want.

A sandboxed app may even think it is writing into a system folder for preference storage for example - but the system rewrites the path so that it ends up in the Container folder instead. For example under macOS apps typically write their data to ~/Library/Application Support. A sandboxed app cannot do that - and the data is instead written beneath the ~/Library/Containers/app-id path for that app.

And here’s how good Apple is: any application, including 3rd-party tools running inside Terminal, will be restricted.

I bet most people weren’t expecting that a simple ls would trigger the sandbox restrictions applied to the Terminal application. The best part is that instead of doing what Flatpak does (just blocking things and leaving the user unable to do anything), the system will prompt you for a decision.

I believe this is the best way to go about things, but it would require a DE team to make it cohesive and deeply integrated with the system. Canonical could do it… but we all know how Canonical is.


Alternative to Home Assistant for ESPHome Devices
Hello,

My IoT/Home Automation needs are centered around custom-built ESPHome devices and I currently have them all connected to an HA instance and things work fine. Now, I like HA's interface and all the sugar candy, however I don't like the massive amount of resources it requires and the fact that the storage usage keeps growing and it is essentially a huge, albeit successful, docker clusterfuck. Is there any alternative dashboard that just does this:

1. Specifically made for ESPHome devices - no other devices required;
2. A single daemon or something PHP/Python/Node that you can set up manually with a few systemd units;
3. Connects to the ESPHome devices, logs the data and shows a dashboard with it;
4. Runs offline, doesn't go into 24234 GitHub repositories all the time and whatnot.

Obviously I'm expecting more manual configuration; I'm okay with having to edit a config file somewhere to add a device, change the dashboard layout etc. I also don't need the ESPHome part that builds and deploys configurations to devices as I can do that locally on my computer.

Thank you.


Hey,

For all of you that are running proper setups and use nftables to protect your servers, be aware that `pvxe/nftables-geoip` now has the ability to generate IP lists by country.

This can be used to, for instance, drop all traffic from specific countries or the opposite, drop everything except for your own country.

https://github.com/pvxe/nftables-geoip/commit/c137151ebc05f4562c56e6802761e0a93ed107a2

Here's how you can block / track traffic from certain countries:

- https://wiki.nftables.org/wiki-nftables/index.php/GeoIP_matching
- https://www.mybluelinux.com/nftables-and-geoip

Previously you had to load the entire geoip DB, which spans multiple GB, and would end up using a LOT of RAM. Those guides aren't yet updated to use the country-specific files but it's just a matter of changing the `include` line to whatever you've generated with `pvxe/nftables-geoip`.
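To give an idea of what the end result looks like, here's a rough sketch - the include path and variable name below are hypothetical, use whatever `pvxe/nftables-geoip` actually generated for you:

```
# /etc/nftables.conf (fragment)
include "/etc/nftables.d/geoip-XX.nft"    # hypothetical country-specific file from the generator

table inet filter {
  chain input {
    type filter hook input priority 0; policy accept;
    ip saddr $geoip_XX_ipv4 drop    # drop everything coming from that country (hypothetical variable name)
  }
}
```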

I'm looking for an application (Windows or maybe web) that can be used to combine images vertically and horizontally. I usually go with PhotoScape (screenshot) for this but that's not free nor updated anymore. Important features for me are being able to combine horizontally or vertically, set the number of rows or columns, and have the ability to resize the final image. Thank you.

The Banana Pi BPI-M7 single board computer is equipped with up to 32GB RAM and 128GB eMMC flash, and features an M.2 2280 socket for one NVMe SSD, three display interfaces (HDMI, USB-C, MIPI DSI), two camera connectors, dual 2.5GbE, WiFi 6 and Bluetooth 5.2, a few USB ports, and a 40-pin GPIO header for expansion.


Deleted Posts
I've noticed that posts in this community tend to get deleted, even ones with multiple comments and/or useful information. Even worse is when they get posted again by some other user a few days later. What's going on? What's the policy around here?

cross-posted from: https://lemmy.world/post/7123708 > In this article, you will discover the ISO images that Debian offers and learn where and how to download them. I’ll also provide some useful tips on how to use Jigdo to archive the complete Debian repository into ISO images.

Debian 12.1 (6.1.0-11-amd64) running LXD/LXC and on an unprivileged container setting `security.idmap.isolated=true` seems to fail to update the owner/group of the container's files. Here is an example:

````
# lxc launch images:debian/12 debian
(...)
# lxc config get debian volatile.idmap.base
296608
# lxc stop debian
Error: The instance is already stopped
# lxc config set debian security.idmap.isolated true
# lxc config get debian security.idmap.isolated
true
# lxc start debian
````

Now if I list the files on the container volume I'll see they're all owned by the host `root` user:

````
# ls -la /mnt/NVME1/lxd/containers/debian/rootfs/
total 24
drwxr-xr-x 1 root   root  154 Sep  5 06:28 .
d--x------ 1 296608 root   78 Sep  5 15:59 ..
lrwxrwxrwx 1 root   root    7 Sep  5 06:25 bin -> usr/bin
drwxr-xr-x 1 root   root    0 Jul 14 17:00 boot
drwxr-xr-x 1 root   root    0 Sep  5 06:28 dev
drwxr-xr-x 1 root   root 1570 Sep  5 06:28 etc
````

I tried multiple versions of LXD/LXC. This happens with both 5.0.2 from `apt` as well as with 4.0 and 5.17 (latest) from `snap`.

Interestingly enough I have another Debian 10 (4.19.0-25-amd64) running an older LXD 4 from `snap` and on that one things work as expected:

```
# ls -la /mnt/NVME1/lxd/containers/debian/rootfs/
total 0
drwxr-xr-x 1 1065536 1065536  138 Oct 29  2020 .
d--x------ 1 1065536 root      78 Oct 14  2020 ..
drwxr-xr-x 1 1065536 1065536 1328 Jul 24 19:07 bin
drwxr-xr-x 1 1065536 1065536    0 Sep 19  2020 boot
drwxr-xr-x 1 1065536 1065536    0 Oct 14  2020 dev
drwxr-xr-x 1 1065536 1065536 1716 Jul 24 19:08 etc
```

As you can see, on this system all the files are owned by `1065536:1065536`.

---------------

**Update:** I tried to probe around the maps with `lxc config show debian` on both machines and I saw this:

**Machine running Debian 10:**

````
security.idmap.isolated: "true"
(...)
volatile.idmap.base: "1065536"
volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1065536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1065536,"Nsid":0,"Maprange":65536}]'
volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1065536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1065536,"Nsid":0,"Maprange":65536}]'
volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1065536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1065536,"Nsid":0,"Maprange":65536}]'
````

**Machine running Debian 12:**

````
security.idmap.isolated: "true"
(...)
volatile.idmap.base: "231072"
volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":231072,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":231072,"Nsid":0,"Maprange":65536}]'
volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":231072,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":231072,"Nsid":0,"Maprange":65536}]'
volatile.last_state.idmap: '[]'
````

Why didn't it populate `volatile.last_state.idmap` (it's just `'[]'`)? How can I fix it?

Thank you.

Hello,

There's this website https://weather.ambient-mixer.com/the-perfect-storm that has a nice mixer of background sounds / ambient music. I would like to know if it's somehow possible to rip the player and all the music it allows on the channel mixers to use offline.

The same question also applies to these:

https://mynoise.net/NoiseMachines/rainNoiseGenerator.php
https://mynoise.net/NoiseMachines/thunderNoiseGenerator.php
https://mynoise.net/NoiseMachines/fireNoiseGenerator.php

Thank you.

Hello, I've been using Armbian on a bunch of ARM SBCs and they have a very nice MOTD on SSH login that shows CPU, RAM, storage and networking information. Is there anything similar for a regular x86 machine? I tried to grab the scripts from a NanoPi M4v2 board but had to change a ton of stuff to get them working on x86 and they aren't portable, as AMD and Intel report temps differently. Or... does anyone know if their x86 version has it working and where to get it? Just for reference I'm talking about this: https://cdn.tcb13.com/2023/armbian-motd.jpg Thank you.

Linux Performance Tools
"This page links to various Linux performance material I've created, including the tools maps on the right. These use a large font size to suit slide decks. You can also print them out for your office wall. They show: Linux observability tools, Linux static performance analysis tools, Linux benchmarking tools, Linux tuning tools, and Linux sar. Check the year on the image (bottom right) to see how recent it is."

After a few conversations with people on Lemmy and other places it became clear to me that most aren't aware of what it can do and how much more robust it is compared to the usual "jankiness" we're used to. In this article I highlight less-known features and give a few practical examples of how to leverage Systemd to remove tons of redundant packages and processes. **And yes, Systemd does containers.** :)


Hello, I'm looking for a unit converter written in JS / client-side only that I can self-host / add to a bunch of tools I already use. I'm looking for suggestions for something similar to the good old https://joshmadison.com/convert-for-windows/ but that runs in a browser. Thank you for your suggestions.

Debian 12: How to setup disk encryption with TPM2
Hello,

I have an **HP EliteBook 840 G5** that I've been using up until now with Windows 10. I want to replace it with **Debian 12**, however since this is a laptop I would like to have my disk fully encrypted as well as the boot stage (initramfs etc).

**My threat model**: make sure that if someone steals the laptop, powered off, they won't be able to access my data. I would also like to avoid evil maid attacks and make sure I'm not booting into some modified kernel / system with spyware or that will leak my TPM keys.

I've found some information online but I'm unsure of how secure those setups are and/or if it isn't even possible to have the same level of security that Windows provides. Here are a few of my questions:

- Anyone around here that has a similar HP laptop and did this?
- What about enrolling secure boot keys in the UEFI? From what I read, simply using the typical Linux shim makes things more secure but it doesn't fix the problem. Enrolling keys seems to break some motherboards
- Even if I use `--tpm2-pcrs=1,4,5,7,9`, how secure is that, should I add more?
- What is the impact of this on system upgrades? How do I deal with those?
- If I want to proceed with this, what should I know / what typically fails or can be problematic / a security issue?

Some of the information I found:

- https://0pointer.net/blog/unlocking-luks2-volumes-with-tpm2-fido2-pkcs11-security-hardware-on-systemd-248.html
- https://saligrama.io/blog/post/upgrading-personal-security-evil-maid/
- https://fedoramagazine.org/automatically-decrypt-your-disk-using-tpm2/

Thank you.
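For reference, the systemd-based approach from the first link boils down to something like this once the LUKS2 volume exists - a sketch only; the device path and the PCR list are examples, not a recommendation:

```
# Enroll the TPM2 chip as an additional unlock method for the LUKS2 volume
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3

# And let /etc/crypttab try the TPM2 token at boot, e.g.:
# cryptroot  /dev/nvme0n1p3  none  tpm2-device=auto
```

After that you still need to make sure your initramfs actually includes the TPM2 unlock bits, which on Debian may need extra setup.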


Cryptomator: A Warning About Data Loss
Hello,

I'm just posting this as a warning to anyone using Cryptomator for serious stuff. I've been using it for not-very-critical stuff for some years now and the reality is that I've had data loss on multiple occasions under Windows. I had two major incidents:

- After creating a vault in Google Drive (via Cyberduck) it worked fine for some time but eventually the vault was empty;
- Long file names seem to f*k something up and the files simply vanish after opening the vault a few times.

If you google "cryptomator data loss" there are a LOT of complaints and frankly I'll ditch it now.