Mine looks like this:
UUID=blah /media/games ntfs3 uid=1000,gid=1000,umask=000,rw,user,exec,nofail,nocase,windows_names 0 0
If you’re copy-pasting this, make sure your uid and gid match, of course.
But the key thing for Steam is that you need to have your compatdata folder on a Linux partition, because Proton creates folders with invalid characters (like :). windows_names would prevent that of course, and thus prevents corruption, but it would also cause Proton to fail since it can’t create those folders/files. So you’ll need to symlink that folder on your NTFS disk to point to a folder on a Linux partition.
Eg:
$ mkdir -p ~/.steam/steam/steamapps/compatdata
$ ln -s ~/.steam/steam/steamapps/compatdata /media/games/Steam/steamapps/
Of course, before you run the above, you’ll need to delete the existing compatdata folder from the NTFS disk.
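Putting it all together - and assuming your NTFS drive is mounted at /media/games as in the fstab above - the full sequence would be something like:
$ rm -rf /media/games/Steam/steamapps/compatdata          # remove the old folder from the NTFS disk
$ mkdir -p ~/.steam/steam/steamapps/compatdata            # create the real folder on a Linux partition
$ ln -s ~/.steam/steam/steamapps/compatdata /media/games/Steam/steamapps/   # symlink it back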
I’d like to see a simple, dependency-free calculator app written in Rust, using egui. All other GUI calculator apps I’ve seen so far are unnecessarily heavy, using bloated toolkits like GTK or Qt.
This would be handy for those who run a GTK/Qt-free environment, and/or those who just want a tiny calculator app (optimised for the smallest binary size) without any external dependencies. Preferably it would even be compiled using musl, to remove any glibc dependencies - resulting in a simple, small, portable binary that can run on any distro and doesn’t even need to be installed.
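For reference, the basic recipe for targeting musl with cargo looks something like this (the exact target triple depends on your arch, and a GUI app may still need some extra work to link everything statically):
$ rustup target add x86_64-unknown-linux-musl
$ cargo build --release --target x86_64-unknown-linux-musl
# the binary lands in target/x86_64-unknown-linux-musl/release/ with no glibc dependency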
Eventually, I would like to see this idea expanded to other apps - such as a simple text editor, a simple image editor, and maybe even a simple and lightweight web browser using Servo.
ntfs3 has had several improvements in 6.2 and 6.8, and it’s been pretty stable for me of late. I use it mainly to share/back up my Steam game library, plus for my portable drives for general data storage/local backups, and haven’t had any issues.
It’s not orphaned. There was a bit of a lull after it was introduced in kernel 5.15, and yes, it was a bit unstable in the 5.x series, but it’s been pretty good since 6.2, where they finally introduced the nocase and windows_names mount options. The performance improvements are worth it if you use NTFS heavily, so I would personally recommend switching.
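For a quick one-off mount (outside of fstab), the equivalent would be something like this - the device name is just an example, adjust for your disk:
$ sudo mount -t ntfs3 -o uid=1000,gid=1000,nocase,windows_names /dev/sdb1 /mnt/games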
It refers to modern Intel CPUs where there are two types of cores - performance cores (P-cores) and efficient cores (E-cores). This is similar to ARM’s big.LITTLE architecture which we’ve seen in smartphones for many years already.
See: https://www.intel.com/content/www/us/en/gaming/resources/how-hybrid-design-works.html
If it’s just Crunchyroll doing this, you can disable auto-play for it (or just disable it for all sites - IMO, automatic playback of media is pretty annoying).
Another alternative is to use Auto Tab Discard, which automatically suspends tabs that have been inactive for x seconds. This helps save memory and CPU usage, and greatly benefits laptop users too. So if you tend to leave your browser open with dozens of tabs in the background, I’d highly recommend getting this.
Indeed. But I think some confusion will remain as long as the ntfs-3g FUSE driver is still included by distros. Because right now, you have to explicitly specify the filesystem type as ntfs3 if you want to use the new in-kernel driver, otherwise it would use ntfs-3g. And most guides on the web still haven’t been updated to use ntfs3 in the fstab, so I’m afraid this confusion will persist for some time.
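In other words, the only difference in the fstab is the filesystem type field, and you can verify which driver actually got used with findmnt (the kernel driver shows up as ntfs3, while ntfs-3g mounts show up as fuseblk):
# FUSE driver
UUID=blah /media/games ntfs-3g defaults 0 0
# in-kernel driver
UUID=blah /media/games ntfs3 defaults 0 0
$ findmnt -no FSTYPE /media/games
ntfs3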
Here’s the TL;DR from Phoronix:
#AMD
AMD P-State Preferred Core handling for modern Ryzen systems. This is for leveraging ACPI CPPC data between CPU cores for improving task placement on AMD Ryzen systems for cores that can achieve higher frequencies and also helping in hybrid selection between say Zen 4 and Zen 4C cores. This AMD Preferred Core support has been in development since last year.
Performance gains on AMD 4th Gen EPYC
AMD FRU Memory Poison Manager merged along with other work as part of better supporting the AMD MI300 series.
AMD has continued upstreaming more RDNA3+ refresh and RDNA4 graphics hardware support into the AMDGPU driver.
#Intel
Intel Xeon Max gains in some AI workloads
Intel FRED was merged for Flexible Return and Event Delivery with future Intel CPUs to overhaul CPU ring transitions.
Reworked x86 topology code for better handling Intel Core hybrid CPUs.
Intel Fastboot support is now enabled across all supported graphics generations.
Intel Core Ultra “Meteor Lake” tuning that can yield nice performance improvements for those using new Intel laptops.
Continued work on the experimental Intel Xe DRM kernel graphics driver that Intel is aiming to get ready in time for Xe2 / Lunar Lake.
Support for larger frame-buffer console fonts with modern 4K+ displays.
Dropping the old NTFS driver.
Improved case-insensitive file/folder handling.
Performance optimizations for Btrfs.
More efficient discard and improved journal pipelining for Bcachefs.
FUSE passthrough mode finally made it to the mainline kernel.
More online repair improvements for XFS.
Much faster exFAT performance when engaging the “dirsync” mount option.
Full summary here: https://www.phoronix.com/review/linux-69-features/
What are corporate users using?
Corporates are using ThinkPads, HP EliteBooks and MacBooks, with the OS being mainly Windows or macOS. Linux on workstations is pretty rare - mainly used by developers - and the distro is usually Ubuntu LTS (which I do not recommend).
Since you want to use Linux, go for a ThinkPad. There are more Linux-friendly laptops of course (like Framework and System76), but I believe none of them offer corporate levels of stability and build quality like ThinkPads do yet - as you have experienced yourself with System76.
Main pro-tip is to avoid systems that use nVidia cards - they’re often responsible for buggy suspend/resume in Linux, and can break your OS sometimes when you do an OS/kernel upgrade. So if you’re after stability, avoid nVidia like the plague.
For the docking station, I’ve had good experiences with the HP Thunderbolt Dock G4. The initial releases were in fact a bit buggy with suspend/resume, but HP have released subsequent firmware updates to fix those issues. In fact, HP have been really good at providing regular firmware updates for those docks, and the best part is that it’s on the LVFS too - which means the firmware can be updated directly in Linux using fwupd. A lot of vendors don’t bother updating docking stations - and even fewer update them via LVFS - so this is something you might definitely want to look into.
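Checking for and applying dock firmware updates with fwupd is just a couple of commands:
$ fwupdmgr refresh
$ fwupdmgr get-updates
$ fwupdmgr update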
Finally, for distro recommendations, I would recommend a Fedora Atomic distro since they’re immutable, and rollbacks are as easy as selecting the previous image in the boot menu. Given your requirements, I’d recommend Bluefin - specifically the Developer Experience version, since it comes with virtualisation tools OOTB.
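For reference, rolling back from the command line on a Fedora Atomic system is a one-liner with rpm-ostree, and you can also pin a known-good deployment so it never gets garbage-collected:
$ rpm-ostree rollback
$ sudo ostree admin pin 0    # keep the current deployment around permanently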
For reference, I mainly use Bazzite (another variant of Fedora Atomic) on my pure-AMD ThinkPad Z13, and haven’t had any issues with suspend/resume, external monitors, or virtualisation dev/test workflows. There’s virtually no overnight battery drain either when suspending. My system also supports Opal2, so my drive is encrypted transparently to the OS, with virtually no performance overhead. It’s also nice not having to muck around with LUKS and the complexities around it. I use this system for both work and personal use (gaming), and it’s been a great experience so far - both software and hardware. Happy to answer any questions you may have.
There’s also XFCE and LXQt, if you want simple, easy-to-use environments.
My elderly, non-techy mum has been using XFCE for over a decade across three different distros (Mint, Xubuntu, Zorin), and her experience has been consistent all these years, with no major issues or complaints. If my mum can use Linux just fine, so can anyone else (anyone who doesn’t have specific/complex hw/sw requirements, that is). I don’t see how much more intuitive it needs to get.
KDE, Gnome, XFCE, LXQt etc all have their own place and audience. There’s no need to have one experience for all - in fact, that would be a huge detriment, because you can never satisfy everyone with a one-size-fits-all approach. Take a look at Windows itself as an example - the abomination that was the Start Menu in Windows 8 (and the lack of the start button) angered so many, to the point that Microsoft had to backtrack some of those design decisions. Then there was the convoluted mess of Metro and Win32 design elements in Win 10, and finally the divisive new taskbar in Win11… you’re never going to make everyone happy. And this is where Linux shines - all the different DEs and WMs offer a UX that suits a different audience or requirements. And we should continue to foster and encourage the development of these environments. Linux doesn’t need to be like Windows.
What @lemmyreader said, except this is XFCE installed directly on Termux (and accessed via Termux-X11, a native X server for Android). No *buntu involved here. If you have an Android as well, you can set all this up (minus the actual Chicago95 theme) using this script.
- I used OneDrive, and especially the file on-demand (all files on server visible in explorer but only downloaded when needed) feature a lot
You can continue to use OneDrive. I use the OneDriver client and it works really well - your drive appears just like a local drive, but files only get downloaded when you try to access them. Once downloaded, it gets cached locally and is available offline, and is kept in sync automatically. Other cloud providers should have similar FUSE clients available.
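Roughly, getting OneDriver going from the CLI looks like this (the mountpoint is just an example - check the project’s README for the exact invocation and how to set it up as a user service):
$ mkdir -p ~/OneDrive
$ onedriver ~/OneDrive    # the first run walks you through Microsoft account authentication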
- What are best practices for managing apps?
Best practice is to stick to packages provided by your distro’s repos. Flatpak should be your second option if you can’t find your app there, and AppImages your third (Flatpaks are superior since they can share dependencies, unlike AppImages). Avoid Snap. In fact, avoid any distros that even use Snap (*buntu). Also, if you’re on a Debian/Ubuntu-based distro, avoid adding PPAs (third-party user repositories) as far as possible, as these can cause dependency issues and may cause pain when you upgrade your distro.
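Eg, if your distro doesn’t ship an app, grabbing the Flatpak from Flathub is just:
$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
$ flatpak install flathub org.gnome.Calculator    # or whatever app ID you find on flathub.org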
Is there a GUI (I know) way to see all applications
That should be provided by your distro - Gnome-based ones have “Software” and KDE-based ones have “Discover”.
Since you’re on Linux, it’s just a matter of installing the right packages from your distro’s package manager. There are lots of articles on the web - just google your app + “ROCm”. The main thing you’ve got to keep in mind is version dependencies: since ROCm 6.0/6.1 were released only recently, some programs may not yet have been updated for them. So if your distro packages the most recent version, your app might not support it yet.
This is why many ML apps also come as a Docker image with specific versions of libraries bundled with them - so that could be an easier option for you, instead of manually hunting around for various package dependencies.
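As a rough illustration, running one of AMD’s pre-built ROCm images just needs the GPU devices passed through - the image name and exact flags may vary, so check the ROCm docs for your distro:
$ sudo docker run -it --rm \
    --device=/dev/kfd --device=/dev/dri \
    --group-add video --security-opt seccomp=unconfined \
    rocm/pytorch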
Also, chances are that your app may not even know/care about ROCm, if it just uses a library like PyTorch / TensorFlow etc. So just check its requirements first.
As for AMD vs nVidia in general, there are three main areas where they lag behind: ray tracing (RTX), compute and super sampling.
For ray tracing, there have been improvements in performance with the RDNA3 cards, but AMD still lags behind by a generation. For instance, the latest 7900 XTX’s ray tracing performance is roughly equivalent to the 3080’s.
Compute is catching up as I mentioned earlier, and in some cases the performance may even match nVidia. This is very application/library specific though, so you’ll need to look it up.
Super sampling is a bit of a weird one. AMD has FSR and it does a good job in general. In some cases it may even perform better, since it uses much simpler calculations, as opposed to nVidia’s deep-learning technique. And AMD’s FSR can in fact be used with any card, as long as the game supports it. And therein lies the catch: only something like a third of the games out there support it, and even fewer games support the latest FSR 3. But there are mods out there which can enable FSR (check Nexus Mods) that you might be able to use. In any case, FSR/DLSS isn’t a critical thing, unless you’re gaming on a 4K+ monitor.
You can check out Tom’s Hardware GPU Hierarchy for the exact numbers - scroll down halfway to read about the RTX and FSR situation.
So yes, AMD does lag behind nVidia, but whether this impacts you really depends on your needs and use cases. If you’re a Linux user though, getting an AMD is a no-brainer - it just works so much better: no need to deal with proprietary driver headaches, no update woes, excellent Wayland support etc.
I based my statements on the actual commits being made to the repo; from what I can see, it’s certainly not “floundering”:
In any case, ZLUDA is really just a stop-gap arrangement so I don’t see it being an issue either way - with more and more projects supporting AMD cards, it won’t be needed at all in the near future.
It’s not “optimistic”, it’s actually happening. Don’t forget that GPU compute is a pretty vast field, and not every field/application has a hard-coded dependency on CUDA/nVidia.
For instance, both TensorFlow and PyTorch work fine with ROCm 6.0+ now, and this enables a lot of ML tasks such as running LLMs like Llama 2. Stable Diffusion also works fine - I tested 2.1 a while back and performance has been great on my Arch + 7800 XT setup. There are plenty more such examples where AMD is already a viable option. And don’t forget ZLUDA too, which is continuing to be improved.
I mean, look at this benchmark from Feb, that’s not bad at all:
And ZLUDA has had many improvements since then, so this will only get better.
Of course, whether all this makes an actual dent in nVidia’s compute market share is a completely different story (thanks to enterprise $$$ + existing hw that’s already out there), but the point is, at least for many people/projects, ROCm is already a viable alternative to CUDA for many scenarios. And this will only improve with time. Just within the last 6 months, for instance, there have been VAST improvements in both ROCm (like the 6.0 release) and compatibility with major projects (like PyTorch). 6.1 was released only a few weeks ago with improved SD performance, a new video decode component (rocDecode), much faster matrix calculations with the new EigenSolver, etc. It’s a very exciting space to be in, to be honest.
So you’d have to be blind not to notice these rapid changes that are really happening. And yes, right now it’s still very, very early days for AMD and they’ve got a lot of catching up to do, and there’s a lot of scope for improvement too. But it’s happening for sure - AMD + the community aren’t sitting idle.
The best option is to just support the developer/project by the method they prefer the most (ko-fi/patreon/crypto/beer/t-shirts etc).
If the project doesn’t accept any donations but accepts code contributions instead (or you want to develop something that doesn’t exist), you can directly hire a freelancer to work on what you want, from sites like freelancer.com.
Hmm, so I’ve had a look, and it seems like Xournal++ only supports x86_64. Which means that if you get the Snapdragon version, you’ll need to run it using an x86 emulator like FEX-Emu or Box64, and this will affect the performance and may also introduce compatibility issues. So you’ll need to do your own research and find out if someone’s managed to run it on ARM / Snapdragon 7c, and whether there are any issues etc.
You could get the Celeron version instead, but personally I can’t recommend a Celeron to anyone in good faith, so you’ll have to make your own decision sorry.
Forget Linux for a second. What you need to be aware of is that both variants come with only 4GB of soldered-on RAM and eMMC storage. That means, even if you do manage to get Linux going on them, it’s going to be super slow for any sort of practical web/GUI needs. 4GB of RAM is barely enough to run a browser these days, and if you tack on a full-fledged DE and multitasking with other apps, you’ll be pushing memory pages to the disk (ie, swapping). And when that happens, you’ll really feel the slowness. Trust me, you don’t want to be swapping to eMMC - that’s super old tech, something like 3x slower than UFS, which in turn is a LOT slower than m.2 NVMe (the current standard used in “proper” laptops/convertibles).
Also, consider this for perspective - even budget smartphones these days come with at least 6GB RAM and UFS storage. So this laptop/convertible - a device meant for productivity - is a complete ripoff.
If money is an issue, then just buy a used laptop (from eBay, or whatever you guys use there). If you’re aiming for good Linux compatibility then ThinkPads are a safe bet. But since you’re after a Surface-like device, then you could just get any older Surface device. Why settle for an imitation when you can get the real thing? In any case, most older x86 laptops from mainstream brands should work fine in Linux in general, just do a google for it to see if there are any quirks or issues.
Regardless of your choice, avoid the Duet 3. 4GB RAM is completely unacceptable for a laptop in 2024.
Are you on Wayland? If so, try setting the theme using nwg-look instead. If not, stick with LXAppearance. Also, btw, I just found out that LXAppearance doesn’t apply GTK settings immediately - you’ll need to log off and on for the change to take effect, if you haven’t done that already.
Also, what’s the DE that you’re using? Because if you’re not on Gnome (from the sounds of it, you’re on LXQt?), you may need to install certain GTK theme engine dependencies like gnome-themes-extra and gtk-murrine-engine. Reboot (or log off/on), and try again.
Also worth trying a different theme such as Breeze or Arc. Maybe try a light variant as well.
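And if the GUI tools still don’t stick, you can always set the GTK theme manually - eg for GTK3 apps, drop something like this into ~/.config/gtk-3.0/settings.ini (theme names here are just examples):
[Settings]
gtk-theme-name=Arc
gtk-icon-theme-name=Adwaita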
If all else fails, open a bug report on the Lutris github.
You won’t find any alternatives because Flatpak has won the war. Pretty much everyone (except Canonical) hates Snap and avoids it like the plague, and AppImages have dropped significantly in popularity amongst users due to the rise of Flatpak and the various advantages it has over AppImages. So you’re basically left with only Flatpak/Flathub.
skim has unfortunately been abandoned - there have been no updates in a year, and several old PRs and issues remain untouched. The currently recommended fork is two_percent, which is also faster and more memory efficient.
If you’re facing the same issue with two_percent as well, you can reach out to the author in this thread, since they haven’t yet opened up their issues tab.
It’s one of the only ways I know of to make a Windows ToGo installation (equivalent of a Linux Live USB),
You can also use WinToUSB for that btw. Yet another option is to install Windows to a VHD file (using a virtual machine, or using Disk2VHD to convert an existing install), then copy it to your USB, and make it bootable using Ventoy. The latter option is more useful, since with Ventoy you could have multiple other Linux ISOs (or other OS/rescue images) all on a single, portable drive.
Those of you reading this might also be interested in two_percent, which is a fork of skim, which in turn is a Rust implementation of fzf. two_percent is faster, more efficient and uses less memory than fzf, which is especially noticeable with large inputs.
In that case, I agree with the others and say leave this up to the router - not only is it far easier to set up, it gives you/your kid the freedom to switch between distros/OSes, and you can even swap computers without worrying about having to set up the controls all over again.
A friend of mine was in the same situation as you (he’s also a Linux nerd), and he ended up with the router thing, and after extensive research, he decided to get a Synology router as it had all the features he was after (mainly limiting access times, monitoring and reporting). See: https://www.synology.com/en-global/srm/feature/device_content_control
And for extra filtering, you could also set the upstream DNS on the router to a filtering service such as Cloudflare for Families, AdGuard DNS Family etc.
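Eg, using Cloudflare for Families just means pointing the router’s upstream DNS at their filtered resolvers (AdGuard DNS has equivalent family endpoints - check their site for the addresses):
1.1.1.2 / 1.0.0.2    # malware blocking only
1.1.1.3 / 1.0.0.3    # malware + adult content blocking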
I disagree with @Shareni@programming.dev (sorry!) - the biggest issue right now is that package maintainers are leaving in droves - at least 15 contributors left a few days ago, a number which has likely increased these past few days - and will continue to increase. I think the only people left will be the ones who support Eelco and the toxic culture brewed by him.
What this means is that you risk your packages getting out of date, including slow delivery of security updates (which was already an increasing concern, due to the way the Nixpkgs build system worked). Worst case scenario, some (many?) packages may never even get an update.
So now’s definitely NOT a good time to switch, and in fact I’d also urge existing users to look at other distros, at least temporarily until this whole thing settles down.
doas is quite popular in the BSD world, and was ported to Linux a few years ago (via the OpenDoas project). For starters, it’s a lot smaller than sudo - under 2k lines of code vs sudo’s 132k - which makes it a lot easier to audit and maintain, and technically less likely to have vulnerabilities.
Another security advantage is that doas doesn’t pass on environment variables by default (you’d have to explicitly declare the ones you want to pass, which you can do in the config).
The config is also a lot simpler, and doesn’t force you to use visudo - which never made sense to me; visudo should’ve just generated the actual config, instead of checking it after the fact. Kinda like how grubby or grub2-mkconfig works. But there’s no need for that complexity with doas.
Eg, the most basic doas config could have just one line in the file: permit :wheel. Maybe add another line for programs you want to run without a password, like permit nopass dexter cmd pacman.
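So a complete /etc/doas.conf for a typical single-user setup might be nothing more than this (“dexter” being the username, as in the example above):
# allow members of the wheel group, with password caching similar to sudo
permit persist :wheel
# let dexter run pacman without a password
permit nopass dexter cmd pacman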
Agreed, this is a nice inclusion. I also hate sudoers with a passion - I already use doas, but it’s not standard (in the Linux world anyway). With systemd providing an alternative, though, it’ll likely become a standard that most distros adopt, and I hope this means we can finally ditch the convoluted sudoers file once and for all.
I personally use a ThinkPad Z13 (all AMD; it’s nice but pricey), but I’d recommend getting a Framework (which wasn’t an option for me back then). I think modular and repairable laptops are cool, plus they seem to be well supported by the Linux community.
This was in fact what prompted my search - the Gnome calculator is so horribly bloated, and yeah, it should have no business making network connections, at least not by default - this should be an opt-in behaviour.