• 3 Posts
• 301 Comments
Joined 1Y ago
Cake day: Jul 07, 2023

Unraid or TrueNAS if you want something open-ish.

Synology for paid products.



There are a lot of variables at play here. The simplest thing you can do is compare the sound system profiles and configs on the two installs and see how they differ. If the setups are similar, you could just take all the configs from Pop!_OS and drop them into your Fedora install, I guess.
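A minimal sketch of that comparison, assuming both installs use PipeWire (the path to the old Pop!_OS files is just a placeholder):

```bash
# Compare system-level and user-level PipeWire/WirePlumber configs
# between the two installs.
diff -ru /mnt/popos-root/etc/pipewire /etc/pipewire
diff -ru /mnt/popos-root/home/user/.config/wireplumber ~/.config/wireplumber
```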


Unraid and TrueNAS are pretty popular. OpenMediaVault is less popular, but it's a pretty simple system based on Debian.


Orchestration software is probably going to be the most useful in an actual disaster: something for labor and task management. A mesh network hub would be pretty useful as well. If you had Starlink, you'd be the hero of your neighborhood by providing Internet to everyone in range.

Here’s an interesting list to look through as well: https://github.com/DisasterTechCrew/awesome-disastertech


This is the answer. Otherwise, you'll need to go into the DB and migrate the objects to the new user, and the chances you'll cover all the bases there without causing issues are slim.



Fix the code without forking the repo, then provide the diff if others want to apply it themselves.
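If you haven't done that before, the workflow is roughly this (repo URL is just a placeholder):

```bash
# Clone the project locally and make the fix.
git clone https://github.com/example/project.git
cd project
# ...edit the offending file(s)...

# Export the change as a patch instead of pushing a fork.
git diff > fix.patch

# Anyone who wants the fix applies it to their own checkout:
git apply fix.patch
```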



Christ. If you had such a problem with it, just block it at the DNS level or put together a diff with a code fix. This is over the top lol


Going in a different direction here:

Buy a stable SSD with your budget to host your OS. Then call around to computer repair places or e-waste recycling joints and ask if they have any old HDDs lying around that can be reused. Use these older HDDs to store your media and other things that can be replaced. You may even get lucky enough to find a few larger HDDs, so you can make a backup of your SATA HDD over USB every so often.
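That periodic backup can be as simple as an rsync mirror; a sketch, assuming the USB-attached HDD is mounted at /mnt/usb-backup:

```bash
# Mirror the media drive to the larger USB-attached HDD.
# -a preserves permissions/timestamps; --delete keeps the copy exact.
rsync -a --delete /mnt/media/ /mnt/usb-backup/media/
```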


Could just skip the hosting step and use Joplin. It has a lot of sync backends built in, plus encryption support in the app, so contents are encrypted before they leave the device.


This is not true. Perhaps on an already at-risk or exploitable machine, but even then it's not trivial, and it is not a widespread thing that happens everywhere all the time.


Containers are isolated from the host by default. If you give a container a mount, it can only interact with that mount, not the running host. If you had further isolated and protected that mount, you would have been fine. Since you ran it as your unprivileged user, it's one step further from being able to hijack other parts of the machine; if it was a “virus”, all it could do is write files to the mount and fill up your disk, I guess, or drop a binary and hope you execute it.
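For example, a sketch of what "further isolated" could look like with Docker (image name and paths are hypothetical):

```bash
# Run as an unprivileged uid/gid with a single read-only bind mount.
# The container can read /srv/shared but can't write to it or touch
# anything else on the host filesystem.
docker run --rm \
  --user 1000:1000 \
  -v /srv/shared:/data:ro \
  some-image:latest
```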


🙄 Read my comment in the context of what I was replying to: the original poster was referring to maintenance updates ONLY. I clarified it pretty well, and that means no point releases, which is what that poster was referring to.


What motherboard are you running?

Also, are you sure your user has the right permissions to access libvirt assets? Do you get the same error if you run as sudo?
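A quick way to check the permissions angle (the group name varies by distro; it may be libvirt, libvirtd, or kvm):

```bash
# See whether your user is in the libvirt group.
groups "$USER"

# If not, add yourself, then log out and back in.
sudo usermod -aG libvirt "$USER"

# Test that virsh can reach the daemon without sudo.
virsh list --all
```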


You have no idea what you’re asking for, that’s why everyone pointed you in one direction, only to have you bitch and complain we “didn’t read the post” and whine about it.


And in return I am asking you to STFU, do some reading, and come back when you're better informed to properly ask for your FREE HELP and get answers.

These people are wasting their time on you right now. You're being a demanding little prick. WE DON'T NEED TO GIVE YOU SHIIIIIIIT, BRRAAAAAHHHHH


Yes, Ubuntu DOES only do security updates. They don't push new major versions of packages into a distro release's channels after it has been released. You have no idea what you are talking about in this thread. You need to go do some reading, please. People are trying to help you, and you're just responding by being rude and snarky. The worst kind of snark, too, because you think you are informed and right, and you're just embarrassing yourself and annoying the people trying to help you.


That doesn’t even have anything to do with this. Phased upgrades are about CHANNELS. As in a select number of systems get the upgrades before anyone else. This is similar to a staging environment in that it minimizes risk. You clearly do not understand what you are asking for here, and are unable to articulate it well enough for us to understand either. I suggest you ask in a different way with more information.


You should be more courteous to the guy who has been responding to you, because he’s giving you exactly what you’re asking for, you just don’t know how to ask for it properly. Just a piece of advice 🤌

That being said, since you don't know exactly what you're afraid of, I can tell you that in my long history of running thousands of Linux machines, containers, and VMs at scale, I've never once seen an unattended upgrade do anything that couldn't immediately be rolled back or fixed. The worst I've seen is impacted services that fail to start. So why don't you just chill out a tiny bit about your Jellyfin server or whatever you're being rude about.



It’s called a staging environment. You have servers you apply changes to first before going to production.

I assume you mean this for home though, so take a small number of your machines and have them run unattended upgrades daily, and set whatever you’re worried about to only run them every few weeks or something.
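On Debian/Ubuntu, staggering it that way is just a matter of the apt periodic settings, which take an interval in days. A sketch:

```
# /etc/apt/apt.conf.d/20auto-upgrades

# On the "canary" machines: check and upgrade daily.
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

# On the machines you're worried about, stretch the upgrade
# interval to every two weeks instead:
# APT::Periodic::Unattended-Upgrade "14";
```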


Check your BIOS. I’m positive this is the issue.


Check your BIOS and make sure hardware virtualization is actually enabled.
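You can confirm from the OS side too; if the first command prints 0, VT-x/AMD-V is disabled in firmware (or unsupported):

```bash
# Count the CPU virtualization flags (vmx = Intel, svm = AMD).
grep -Ec '(vmx|svm)' /proc/cpuinfo

# lscpu also reports it directly on most distros.
lscpu | grep -i virtualization
```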


Which driver is currently enabled? Radeon or amdgpu?

Also, in the context of what “Allocate 0” means, that IS your graphics card if you only have one. Indexing for data types like arrays and lists starts at 0 (e.g. 0, 1, 2, 3…).
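To see which driver is actually bound:

```bash
# "Kernel driver in use:" will read radeon or amdgpu.
lspci -k | grep -EA3 'VGA|3D|Display'
```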


Never heard of it, but it will stop working eventually, not because of Qt, but because updated distros will replace aging or insecure libraries and build tools with versions incompatible with this software. You'd have the option of keeping it running on an older LTS release of something, but eventually that will run out too. Best to just find an actively developed alternative at some point.


Yeah, that'll work. GParted should wipe the destination disk for you and set the partition boundaries and such. Should be super easy. You can find guides online as well.

Clonezilla is also a super easy route.


Using dd like this is not doing a bit-for-bit clone of your drive. What you want to do can be done with dd on a blank disk (no filesystem), but you might as well just use GParted and make it easy on yourself. Otherwise, you need to make sure the source and destination disks have the exact same geometry and such; it's just more steps you seem to not want to take. Just take the easy route.
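For reference, a whole-disk clone with dd looks something like this (device names are examples, and the destination must be at least as large as the source):

```bash
# Bit-for-bit copy of the entire disk, partition table included.
# Triple-check if= and of= before running; this overwrites sdY.
sudo dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync
```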


The only solid reason I can think of to carry anything on a USB stick is if you're going to be in an area without Internet. If you're in an IT role where you're interacting with end-user machines all the time, then the answer would obviously be some sort of live environment to troubleshoot or fix issues. In that case, load a Ventoy partition with a few different images and be done with it, I guess.

If you’re thinking like a Prepper or whatever, keep a copy of Wikipedia, and some survival books maybe? Maps? That’s all I can think of. If you’re going this far, better carry a backpack with portable solar panels, a large battery, and a lifejacket. None of this matters when you don’t have food and water though, so…


If your drives are set up for UEFI booting, you can just use your BIOS boot menu to boot Windows. You can fix the bootloaders if you want, but this is the answer to your base question.
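If you want to poke at the UEFI entries directly, efibootmgr shows what the firmware sees (the entry number below is just an example):

```bash
# List the UEFI boot entries the firmware knows about.
sudo efibootmgr

# One-off: boot entry 0003 (e.g. Windows Boot Manager) on next restart.
sudo efibootmgr --bootnext 0003
```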


Did you disable DHCP on your access points? This sure sounds like you plugged everything in and didn't set up the network IP space and addressing properly. You should only have one DHCP provider on a network, and it seems like you have multiple running.
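One way to confirm, if you have nmap handy, is to broadcast a DHCP discover and count the responses:

```bash
# More than one responding server means competing DHCP on the segment.
sudo nmap --script broadcast-dhcp-discover
```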


Digital output for IEC958 is going to be the one that works. Reboot if needed.
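Assuming PulseAudio or PipeWire, you can switch to it from the CLI (the sink name below is an example; use whatever the list actually shows):

```bash
# List available sinks and find the IEC958/S-PDIF one.
pactl list short sinks

# Make it the default output.
pactl set-default-sink alsa_output.pci-0000_00_1b.0.iec958-stereo
```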


It's certainly more cohesive if you're doing a Samba<>Samba setup. Either combo will work, though; you just need to make sure of the share permissions, and that your connected uid/gid is set properly for read/write.
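For example, a minimal sketch of a CIFS mount that pins the local uid/gid (names and IDs are examples):

```bash
# Mount the share so files appear owned by local uid/gid 1000,
# authenticating as a user that has write access on the server.
sudo mount -t cifs //server/share /mnt/share \
  -o username=shareuser,uid=1000,gid=1000,rw
```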


I’m not aware of any distros catering to specific locales in their installers, but maybe that’s a thing.


Okay, so you need to match the uid/gid of your user on the client machine with whatever is on the host volume machine because it seems like your auth is not set right. You probably want a dedicated user. If you’re not sure what that means, just move on to the next bit.

On the Windows machine: create a new user and make sure ownership is set in the permissions, then log in with that user from the client machine. Then you won't need sudo. You can Google for a fuller explanation, but that's the gist.

If you need sudo to create files, it means the authenticated user on your Windows share doesn't have permission to actually write on the Windows machine.


Okay, so on the device that's connecting to the share: from a CLI, can you create files on the share mount? Don't use your GUI if you're using one. Go to a prompt and touch or copy a file in the mount location.
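Something like this (mount path is an example):

```bash
# Try to create a file directly in the mounted share.
touch /mnt/share/test-write && echo "write OK"
```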


Just make sure you’re actually authenticating to the network share and not browsing an open/anonymous share. The user perms on the host of the volume need to match for read/write, or need to be publicly writable.


Yep. I've seen nothing of the sort in the wild. It's still Ubuntu and RHEL/CentOS/Rocky/AMZ2 in the DC almost exclusively. The only things I've seen making a few inroads for practical applications are CachyOS and Clear Linux.




Overall, probably a positive thing as the improvements made here will flow downstream. I'm actually looking forward to seeing the performance of these new Qualcomm chips in laptops.

Is the AMD Phoenix power draw really just THAT bad right now?
TL;DR: I've tested multiple different Ryzen 7000 configurations on various kernels, and the power draw just seems really bad.

I've been looking for a decent new laptop workstation that fits various tasks. Phoenix chips check a lot of the boxes that I want, but the power draw on Linux for these chips seems a bit...crazy. The product docs say these chips are 35W-45W, but I figured that was just the range of maximums. What I'm seeing on fresh installs of various Debian variants is a CONSTANT power draw of at least 35W on the low end at all times.

I've stepped through kernel point releases from 6.0 to 6.6 to test, and the later versions are definitely better at using a bit less power thanks to amd_pstate_epp being included directly in the kernel, but the power draw for the CPU package at idle is still there. A few different laptop models I've tested will only get 90 minutes on battery because of this.

I've now tried four different models from three different manufacturers, and all show the same type of power draw. Is this just a "thing" with these chips? I understand they were modified from desktop to be a more mobile platform, but this is just terrible from an end-user perspective. I want the CPU and iGPU, and hell, even the FPGA XDNA thingie, but not when the machine can barely run off of battery.
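For anyone comparing notes, a sketch of how to check which cpufreq driver and preference are active (sysfs paths assume a recent kernel; "power" is one of the standard EPP presets):

```bash
# Which cpufreq driver is bound (amd-pstate-epp vs acpi-cpufreq).
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver

# Current energy-performance preference when the driver is in EPP mode.
cat /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference

# Bias toward power savings on all cores.
echo power | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/energy_performance_preference
```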