• 105 Posts
  • 40 Comments
Joined 1Y ago
Cake day: Jul 28, 2023


> Linus Torvalds speaks on the divide between Rust and C Linux developers and the future of Linux. Will things like fragmentation among the open source community hurt the Linux kernel? We'll listen to the creator of Linux.

For the full keynote, check out: [Keynote: Linus Torvalds in Conversation with Dirk Hohndel](https://www.youtube.com/watch?v=OM_8UOPFpqE)

The Register's summary: [Torvalds weighs in on 'nasty' Rust vs C for Linux debate](https://www.theregister.com/2024/09/19/torvalds_talks_rust_in_linux/)

It looks like you are running XFCE instead of GNOME (the normal Ubuntu desktop). I’m not sure how that happened… but you can always just install another desktop.

For instance, you can try to make sure you have the ubuntu-desktop or ubuntu-desktop-minimal metapackage installed:

sudo apt install ubuntu-desktop-minimal

After that, the login manager should allow you to select the Ubuntu session rather than the XFCE one.


Still using mutt after two decades (with isync for fetching).


Yes, based on the diagrams on their blog, it looks like this only impacts Snaps.


From the Discourse Blog:

> The Linux desktop provides XDG Desktop Portals as a standardised way for applications to access resources that are outside of the sandbox. Applications that have been updated to use XDG Desktop Portals will continue to use them. Prompting is not intended to replace XDG Desktop Portals but to complement them by providing the desktop an alternative way to ask the user for permission. Either when an application has not been updated to use XDG Desktop Portals, or when it makes access requests not covered by XDG Desktop Portals.

> Since prompting works at the syscall level, it does not require an application’s awareness or cooperation to work and extends the set of applications that can be run inside of a sandbox, allowing for a safer desktop. It is designed to enable desktop applications to take full advantage of snap packaging that might otherwise require classic confinement.

So this looks like it complements and not replaces the XDG Desktop Portals, especially for applications that have not implemented the Portals. It allows you to still run those applications in confinement while providing some more granular access controls.


cross-posted from: https://lemmy.ndlug.org/post/1104312

> The upcoming Ubuntu 24.10 operating system promises a new feature called “permissions prompting” for an extra layer of privacy and security.
> The new permissions prompting feature in Ubuntu will let users control, manage, and understand the behavior of apps running on their machines. It leverages Ubuntu’s AppArmor implementation and enables fine-grained access control over unmodified binaries without having to change the app’s source code.

From Ubuntu Discourse: [Ubuntu Desktop’s 24.10 Dev Cycle - Part 5: Introducing Permissions Prompting](https://discourse.ubuntu.com/t/ubuntu-desktop-s-24-10-dev-cycle-part-5-introducing-permissions-prompting/47963)

> This solution consists of two new seeded components in Ubuntu 24.10, prompting-client and desktop-security-center alongside deeper changes to snapd and AppArmor available in the upcoming snapd 2.65. The first is a new prompting client (built in Flutter) that surfaces the prompt requests from the application via snapd. The second is our new Security Center:
> In this release the Security Center is the home for managing your prompt rules, over time we will expand its functionality to cover additional security-related settings for your desktop such as encryption management and firewall control.
> ...
> With prompting enabled, an application that has access to the home interface in its AppArmor profile will trigger a request to snapd to ask the user for more granular permissions at the moment of access:
> As a result, users now have direct control over the specific directories and file paths an application has access to, as well its duration. The results of prompts are then stored in snapd so they can be queried and managed by the user via the Security Center.

Linux’s Bedtime Routine
> How does Linux move from an awake machine to a hibernating one? How does it then manage to restore all state? These questions led me to read way too much C in trying to figure out how this particular hardware/software boundary is navigated.

> elementary OS may not be as much as popular as it used to be.
> That being said, elementary OS 8 release is still on the horizon with some useful changes based on Ubuntu 24.04 LTS.
...
> However, amidst disagreement between co-founders during the pandemic in 2022, co-founder Cassidy quit the elementary OS team.
> Right after that, the development pace took a big hit, and we saw elementary OS 7 being released almost a year after Ubuntu 22.04 LTS came up.
...
> A good indicator about its development activity is its upcoming major release, elementary OS 8, based on Ubuntu 24.04 LTS.
> I took a sneak peek at it using the daily build, and elementary OS 8 is almost ready to have an RC release.
...
> You can expect things like:
> - The settings app handles system updates (instead of AppCenter)
> - AppCenter is now Flatpak only
> - New toggle menu icon giving you easy access to the screen reader, onscreen keyboard, font size, and other system settings
> - WireGuard VPN support

I think you meant Pop!_OS (which is developed by System76). TuxedoOS is developed by Tuxedo Computers, which is a European Linux-focused hardware company.

That said, the point stands… there are hardware companies making Linux-supported devices.


> I have completed an initial new port of systemd to musl. This patch set does not share much in common with the existing OpenEmbedded patchset. I wanted to make a fully updated patch series targeting more current releases of systemd and musl, taking advantage of the latest features and updates in both. I also took a focus on writing patches that could be sent for consideration of inclusion upstream.
> The final result is a system that appears to be surprisingly reliable considering the newness of the port, and very fast to boot.
...
> And that is how I became the first person alive to see systemd passing its entire test suite on a big-endian 64-bit PowerPC musl libc system.
...
> While the system works really well, and boots in 1/3rd the time of OpenRC on the same system, it isn’t ready for prime time just yet.
...
> There aren’t any service unit files written or packaged yet, other than OpenSSH and utmps. We are working with our sponsor on an effort to add -systemd split packages to any of the packages with -openrc splits. We should be able to rely on upstream units where present, and lean on Gentoo and Fedora’s systemd experts to have good base files to reference when needed. I’ve already landed support for this in abuild.

This work is part of [Adélie Linux](https://www.adelielinux.org/)

> For those unfamiliar with it, power-profiles-daemon is a low-level component to provide power handling over DBus. Ever used the Power Mode options in the Quick Settings menu in GNOME Shell? Those options interface through this.

From [0.22 Release Notes](https://gitlab.freedesktop.org/upower/power-profiles-daemon/-/releases/0.22):

> Since this release power-profiles-daemon is also battery-level aware and some drivers use this value to be smarter at tuning their optimizations. In particular both the AMD panel power action now uses a progressive approach, changing the ABM based on the battery percentage.
> AMD p-state received various features and improvements:
> - it supports core performance boost when not in power-saver mode.
> - uses minimum frequency to lowest non-linear frequency
> - it is more impervious to faulty firmware and kernel bugs

This should be included in the upcoming Ubuntu 24.10 release.

The Insecurity of Debian
> There has been a steady uptick of people stating that they will migrate (or already have) to Debian – seeking refuge from what they see as greedy corporate influence. I understand the sentiment fully. However, there’s a problem here that I want to talk about: security.
> The ugly truth is that security is hard. It’s tedious. Unpleasant. And requires a lot of work to get right.
> Debian does not do enough here to protect users.
> Long ago, Red Hat embraced the usage of SELinux. And they took it beyond just enabling the feature in their kernel. They put in the arduous work of crafting default SELinux policies for their distribution.
...
> However, its default security framework leaves much to be desired. Debian’s decision to enable AppArmor by default starting with version 10 signifies a positive step towards improved security, yet it falls short due to the half-baked implementation across the system.
...
> The fundamental difference between AppArmor and SELinux lies in their approach to Mandatory Access Control (MAC). AppArmor operates on a path-based model, while SELinux employs a significantly more complex type enforcement system. This distinction becomes particularly evident in container environments.
...
> The practical implications of these differences are significant. In a SELinux environment, a compromised container faces substantial hurdles in accessing or affecting the host system or other containers, thanks to the dual barriers of type enforcement and MCS labels.

TL;DR: According to the author, Debian's use of AppArmor is not as effective as Red Hat's use of SELinux when it comes to security.

> And Linux isn't minimal effort. It's an operating system that demands more of you than does the commercial offerings from Microsoft and Apple. Thus, it serves as a dojo for understanding computers better. With a sensei who keeps demanding you figure problems out on your own in order to learn and level up.
...
> That's why I'd love to see more developers take another look at Linux. Such that they may develop better proficiency in the basic katas of the internet. Such that they aren't scared to connect a computer to the internet without the cover of a cloud.

Related: [Omakub](https://omakub.org/)

#163 Public Transit · This Week in GNOME
cross-posted from: https://lemmy.ndlug.org/post/1050940

Rust for Linux revisited (by Drew DeVault)
> In practice, the Linux community is the wild wild west, and sweeping changes are infamously difficult to achieve consensus on, and this is by far the broadest sweeping change ever proposed for the project. Every subsystem is a private fiefdom, subject to the whims of each one of Linux’s 1,700+ maintainers, almost all of whom have a dog in this race. It’s herding cats: introducing Rust effectively is one part coding work and ninety-nine parts political work – and it’s a lot of coding work. Every subsystem has its own unique culture and its own strongly held beliefs and values.
> The consequences of these factors is that Rust-for-Linux has become a burnout machine. My heart goes out to the developers who have been burned in this project. It’s not fair. Free software is about putting in the work, it’s a classical do-ocracy… until it isn’t, and people get hurt. In spite of my critiques of the project, I recognize the talent and humanity of everyone involved, and wouldn’t have wished these outcomes on them. I also have sympathy for many of the established Linux developers who didn’t exactly want this on their plate… but that’s neither here nor there for the purpose of this post, and any of those developers and their fiefdoms who went out of their way to make life difficult for the Rust developers above and beyond what was needed to ensure technical excellence are accountable for these shitty outcomes.
...
> Here’s the pitch: a motivated group of talented Rust OS developers could build a Linux-compatible kernel, from scratch, very quickly, with no need to engage in LKML politics. You would be astonished by how quickly you can make meaningful gains in this kind of environment; I think if the amount of effort being put into Rust-for-Linux were applied to a new Linux-compatible OS we could have something production ready for some use-cases within a few years.
...
> Having a clear, well-proven goal in mind can also help to attract the same people who want to make an impact in a way that a speculative research project might not. Freeing yourselves of the LKML political battles would probably be a big win for the ambitions of bringing Rust into kernel space. Such an effort would also be a great way to mentor a new generation of kernel hackers who are comfortable with Rust in kernel space and ready to deploy their skillset to the research projects that will build a next-generation OS like Redox. The labor pool of serious OS developers badly needs a project like this to make that happen.

Follow up to: [One Of The Rust Linux Kernel Maintainers Steps Down - Cites "Nontechnical Nonsense"](https://lemmy.ml/post/19699109), [On Rust, Linux, developers, maintainers](https://lemmy.ml/post/19732421), and [Asahi Lina's experience about working on Rust code in the kernel](https://lemmy.ml/post/19706020)

> Even before the Bcachefs file-system driver was accepted into the mainline kernel, Debian for the past five years has offered a "bcachefs-tools" package to provide the user-space programs to this copy-on-write file-system. It was simple at first when it was simple C code but since the Bcachefs tools transitioned to Rust, it's become an unmaintainable mess for stable-minded distribution vendors. As such the bcachefs-tools package has now been orphaned by Debian.

From Jonathan Carter's blog, [Orphaning bcachefs-tools in Debian](https://jonathancarter.org/2024/08/29/orphaning-bcachefs-tools-in-debian/):

> "So, back in April the Rust dependencies for bcachefs-tools in Debian didn’t at all match the build requirements. I got some help from the Rust team who says that the common practice is to relax the dependencies of Rust software so that it builds in Debian. So errno, which needed the exact version 0.2, was relaxed so that it could build with version 0.4 in Debian, udev 0.7 was relaxed for 0.8 in Debian, memoffset from 0.8.5 to 0.6.5, paste from 1.0.11 to 1.08 and bindgen from 0.69.9 to 0.66.
> I found this a bit disturbing, but it seems that some Rust people have lots of confidence that if something builds, it will run fine. And at least it did build, and the resulting binaries did work, although I’m personally still not very comfortable or confident about this approach (perhaps that might change as I learn more about Rust).
> With that in mind, at this point you may wonder how any distribution could sanely package this. The problem is that they can’t. Fedora and other distributions with stable releases take a similar approach to what we’ve done in Debian, while distributions with much more relaxed policies (like Arch) include all the dependencies as they are vendored upstream."
...
> With this in mind (not even considering some hostile emails that I recently received from the upstream developer or his public rants on lkml and reddit), I decided to remove bcachefs-tools from Debian completely. Although after discussing this with another DD, I was convinced to orphan it instead, which I have now done.

> Wedson Almeida Filho is a Microsoft engineer who has been prolific in his contributions to the Rust for the Linux kernel code over the past several years. Wedson has worked on many Rust Linux kernel features and even did a experimental EXT2 file-system driver port to Rust. But he's had enough and is now stepping away from the Rust for Linux efforts.

From Wedson's post on the [kernel mailing list](https://lore.kernel.org/lkml/20240828211117.9422-1-wedsonaf@gmail.com/):

> I am retiring from the project. After almost 4 years, I find myself lacking the energy and enthusiasm I once had to respond to some of the nontechnical nonsense, so it's best to leave it up to those who still have it in them.
...
> I truly believe the future of kernels is with memory-safe languages. I am no visionary but if Linux doesn't internalize this, I'm afraid some other kernel will do to it what it did to Unix.
> Lastly, I'll leave a small, 3min 30s, sample for context here: https://youtu.be/WiPp9YEBV0Q?t=1529 -- and to reiterate, no one is trying force anyone else to learn Rust nor prevent refactorings of C code."

> As part of a massive migration campaign, LinkedIn has successfully moved their operations to Microsoft's Azure Linux as of April 2024, ditching CentOS 7 in the process and taking advantage of a more modern compute platform.
> As many of you might already know, back on June 30, 2024, CentOS 7 reached the end-of-life status, resulting in no new future updates for it, including fixes for critical security vulnerabilities.
...
> The developers have gone with the high-performing XFS filesystem, which was made to work with Azure Linux to fit LinkedIn's use case. In their testing, they found that XFS was performing well for most of their applications, except Hadoop, which is used for their analytics workloads.
> When they compared the issues that cropped up, XFS came out as a more stable and reliable choice than the other candidate, Ext4.
...
> Additionally, LinkedIn's MaaS (Metal-as-a-Service) team has developed a new Azure Linux Image Customizer tool for automating image generation, that takes an existing generic Azure Linux image, and modifies it to use with a given scenario. In this case, a tailored image for LinkedIn.

LinkedIn Engineering Blog: [Navigating the transition: adopting Azure Linux as LinkedIn’s operating system](https://www.linkedin.com/blog/engineering/architecture/navigating-the-transition-adopting-azure-linux-as-linkedins-operatingsystem)

Paying for software is stupid… 10 free and open-source SaaS replacements
> Remember, for every paid SaaS, there is a free open-source self-hosted alternative. Let's take a look at 10 FOSS tools designed to replace popular tools like MS Office, Notion, Heroku, Vercel, Zoom, Adobe, and more.
...
> ⭐ Repos mentioned
> - [LibreOffice](https://github.com/LibreOffice)
> - [Mattermost](https://github.com/mattermost/mattermost)
> - [Nocodb](https://github.com/nocodb/nocodb)
> - [Plane](https://github.com/makeplane/plane)
> - [Appflowy](https://github.com/AppFlowy-IO/AppFlowy)
> - [Jitsi](https://github.com/jitsi)
> - [ERPNext](https://github.com/frappe/erpnext)
> - [Coolify](https://github.com/coollabsio/coolify)
> - [Dokku](https://github.com/dokku/dokku)
> - [Instant](https://github.com/instantdb/instant)

cross-posted from: https://lemmy.ndlug.org/post/1040526

> A judge has dismissed the majority of claims in a copyright lawsuit filed by developers against GitHub, Microsoft, and OpenAI.
> The lawsuit was initiated by a group of developers in 2022 and originally made 22 claims against the companies, alleging copyright violations related to the AI-powered GitHub Copilot coding assistant.
> Judge Jon Tigar’s ruling, unsealed last week, leaves only two claims standing: one accusing the companies of an open-source license violation and another alleging breach of contract. This decision marks a substantial setback for the developers who argued that GitHub Copilot, which uses OpenAI’s technology and is owned by Microsoft, unlawfully trained on their work.
...
> Despite this significant ruling, the legal battle is not over. The remaining claims regarding breach of contract and open-source license violations are likely to continue through litigation.


Microsoft donates the Mono Project to the Wine team
> The Mono Project (mono/mono) (‘original mono’) has been an important part of the .NET ecosystem since it was launched in 2001. Microsoft became the steward of the Mono Project when it acquired Xamarin in 2016.
> The last major release of the Mono Project was in July 2019, with minor patch releases since that time. The last patch release was February 2024.
> We are happy to announce that the WineHQ organization will be taking over as the stewards of the Mono Project upstream at wine-mono / Mono · GitLab (winehq.org). Source code in existing mono/mono and other repos will remain available, although repos may be archived. Binaries will remain available for up to four years.
> Microsoft maintains a modern fork of Mono runtime in the dotnet/runtime repo and has been progressively moving workloads to that fork. That work is now complete, and we recommend that active Mono users and maintainers of Mono-based app frameworks migrate to .NET which includes work from this fork.
> We want to recognize that the Mono Project was the first .NET implementation on Android, iOS, Linux, and other operating systems. The Mono Project was a trailblazer for the .NET platform across many operating systems. It helped make cross-platform .NET a reality and enabled .NET in many new places and we appreciate the work of those who came before us.
> Thank you to all the Mono developers!

Explanation of the differences between all the versions of mono from a [Hacker News comment](https://news.ycombinator.com/item?id=41372086)

> Greetings everyone. It is with much regret that I am writing this post. A plugin, ss-otr, was added to the third party plugins list on July 6th. On August 16th we received a report from 0xFFFC0000 that the plugin contained a key logger and shared screen shots with unwanted parties.
> We quietly pulled the plugin from the list immediately and started investigating. On August 22nd Johnny Xmas was able to confirm that a keylogger was present.

> There's been some Friday night kernel drama on the Linux kernel mailing list... Linus Torvalds has expressed regrets for merging the Bcachefs file-system and an ensuing back-and-forth between the file-system maintainer.

> Forgejo is changing its license to a Copyleft license. This blog post will try to bring clarity about the impact to you, explain the motivation behind this change and answer some questions you might have.
...
> Developers who choose to publish their work under a copyleft license are excluded from participating in software that is published under a permissive license. That is at the opposite of the core values of the Forgejo project and in June 2023 it was decided to also accept copylefted contributions. A year later, in August 2024, the first pull request to take advantage of this opportunity was proposed and merged.
...
> Forgejo versions starting from v9.0 are now released under the GPL v3+ and earlier Forgejo versions, including v8.0 and v7.0 patch releases remain under the MIT license.

What the fuck is an SBAT and why does everyone suddenly care
Follow up to: [“Something has gone seriously wrong,” dual-boot systems warn after Microsoft update](https://lemmy.ml/post/19392094)

> SBAT was developed collaboratively between the Linux community and Microsoft, and Microsoft chose to push a Windows update that told systems not to trust versions of grub with a security generation below a certain level. This was because those versions of grub had genuine security vulnerabilities that would allow an attacker to compromise the Windows secure boot chain, and we've seen real world examples of malware wanting to do that (Black Lotus did so using a vulnerability in the Windows bootloader, but a vulnerability in grub would be just as viable for this). Viewed purely from a security perspective, this was a legitimate thing to want to do.
...
> The problem we've ended up in is that several Linux distributions had not shipped versions of grub with a newer security generation, and so those versions of grub are assumed to be insecure (it's worth noting that grub is signed by individual distributions, not Microsoft, so there's no externally introduced lag here). Microsoft's stated intention was that Windows Update would only apply the SBAT update to systems that were Windows-only, and any dual-boot setups would instead be left vulnerable to attack until the installed distro updated its grub and shipped an SBAT update itself. Unfortunately, as is now obvious, that didn't work as intended and at least some dual-boot setups applied the update and that distribution's Shim refused to boot that distribution's grub.
...
> The outcome is that some people can't boot their systems. I think there's plenty of blame here. Microsoft should have done more testing to ensure that dual-boot setups could be identified accurately. But also distributions shipping signed bootloaders should make sure that they're updating those and updating the security generation to match, because otherwise they're shipping a vector that can be used to attack other operating systems and that's kind of a violation of the social contract around all of this.

> The Linux operating system has reached a notable milestone in desktop market share, according to the latest data from [StatCounter](https://gs.statcounter.com/os-market-share/desktop/worldwide). As of July 2024, Linux has achieved a 4.45% market share for desktop operating systems worldwide.
> While this percentage might seem small to those unfamiliar with the operating system landscape, it represents a significant milestone for Linux and its dedicated community. What makes this achievement even more thrilling is the upward trajectory of Linux's adoption rate.
...
> According to the statistics from the past ten years, It took eight years for Linux to go from a 1% to 2% market share (April 2021), 2.2 years to climb from 2% to 3% (June 2023), and a mere 0.7 years to reach 4% from 3% (February 2024). This exponential growth pattern suggests that 2024 might be the year Linux reaches a 5% market share.
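The quoted timeline is easy to sanity-check with a quick back-of-the-envelope calculation (a sketch, not from the article; the milestone dates come from the quote above, taken at month precision):

```python
from datetime import date

# Milestone dates from the StatCounter figures quoted above
# (month-level precision only, so the 1st of each month is assumed).
two_percent = date(2021, 4, 1)    # Linux reaches 2%
three_percent = date(2023, 6, 1)  # Linux reaches 3%
four_percent = date(2024, 2, 1)   # Linux reaches 4%

# Elapsed time between consecutive milestones, in years.
years_2_to_3 = (three_percent - two_percent).days / 365.25
years_3_to_4 = (four_percent - three_percent).days / 365.25

print(round(years_2_to_3, 1))  # 2.2
print(round(years_3_to_4, 1))  # 0.7
```

Both figures match the article's "2.2 years" and "0.7 years" claims.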

> Last Tuesday, loads of Linux users—many running packages released as early as this year—started reporting their devices were failing to boot. Instead, they received a cryptic error message that included the phrase: “Something has gone seriously wrong.”
> The cause: an update Microsoft issued as part of its monthly patch release. It was intended to close a 2-year-old vulnerability in GRUB, an open source boot loader used to start up many Linux devices. The vulnerability, with a severity rating of 8.6 out of 10, made it possible for hackers to bypass secure boot, the industry standard for ensuring that devices running Windows or other operating systems don’t load malicious firmware or software during the bootup process. CVE-2022-2601 was discovered in 2022, but for unclear reasons, Microsoft patched it only last Tuesday.
...
> The reports indicate that multiple distributions, including Debian, Ubuntu, Linux Mint, Zorin OS, Puppy Linux, are all affected. Microsoft has yet to acknowledge the error publicly, explain how it wasn’t detected during testing, or provide technical guidance to those affected. Company representatives didn’t respond to an email seeking answers.

+1 for Xournal++. That is what I usually use for annotating slides and drawing with my Wacom tablet.


I agree that the amount of work for many students can get quite out of hand and to be honest when I first started teaching, I was pretty guilty of having very work intensive courses.

That said, over the years, I’ve worked to streamline my courses to only include what I believe to be absolutely critical to learning, and have added a lot of scaffolding and automated tests (for immediate results). In general, I try to have no busy work and make sure every assignment is meaningful (as much as it can be, anyway).

Additionally, because I understand that sometimes life happens, I have built-in facilities for automatic extensions on assignments and even have a system for dropping certain homework assignments.

This is not to say that there isn’t work in my classes… it’s just that the work is intended to be relevant and reasonable, which most students seem to agree with these days.

I think students should be expected to work less over a longer period of time.

I think this would be a great idea. Or rather, I think it would be great to allow students to learn at different rates… some may want to go faster, some may want or need to go slower.

I think the modern course-based education system is often too rigid to adequately accommodate the needs of students with different experience levels, resources, or constraints. Something like a Montessori model would be a lot better, IMHO.


First off, 10 is an integer square root. Of 100.

Right, what I was trying to say is that 10 itself is not a perfect square (i.e., 1, 4, 9, 16, 25, etc.), so you cannot take the square root of 10 and get an integer.
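To make the distinction concrete, here is a minimal Python sketch (not from the original thread) using `math.isqrt`, the integer square root:

```python
import math

def is_perfect_square(n: int) -> bool:
    """True if n is the square of some integer."""
    if n < 0:
        return False
    root = math.isqrt(n)  # integer square root, rounded down
    return root * root == n

print(is_perfect_square(100))  # True: 10 * 10 == 100
print(is_perfect_square(10))   # False: isqrt(10) == 3, and 3 * 3 != 10
```

So 10 is a square root (of 100) but is not itself a perfect square.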

I was told by multiple English teachers (including the head of the department) that I was a math student and should never attempt to write because I saw through the regurgitation assignments, didn’t agree with teacher assessments of what Dickens “was trying to do” and had zero interest in confirming their biases.

I think that is unfortunate and probably inappropriate. I try to avoid classifying students as particular types and generally try to encourage them whenever possible to pursue whatever their interests are (even if I disagree or don’t have the same interest myself).

College coursework on the whole is a waste of time reinventing wheels. I don’t need to spend a couple of weeks working up to “Hello, world!” in C and as such left CS as a major my first quarter at uni.

There is a reason for reinventing wheels; it is to understand why they are round and why they are so effective. To build the future, it helps to understand the past.

That said, perhaps the course was too slow for you, which is understandable… I frequently hear that about various classes (including ones I’ve taught).

But teachers do this shit every day, year after year, and we blindly say they’re doing important work even as they discourage people from finding their path and voice, because god forbid a 16-year-old challenges someone in their 50s.

Again, I think you’ve had an unfortunate experience and I think it’s a good thing to challenge your teachers. I certainly did when I was a student and I appreciate it now when students do that with me. I recognize that I am not perfect nor do I know everything. I make mistakes and can be wrong.

I wish you had had a more supportive environment in secondary school, and I now have a better understanding of your perspective. Thanks for the dialogue.


Sure, some people acquire the capability through repetition. But all that matters in the end is if you are capable or not.

I guess the question is how do you develop that capability if you are cheating or using a tool to do things for you? If I use GrubHub to order food or pay someone else to cook for me, does it make sense to say I can cook? After all, I am capable of acquiring cooked food even though I didn’t actually do any of the work, nor do I understand how to, well, actually make food.

The how is relevant if you are trying to actually learn and develop skills, rather than simply getting something done.

> No, the point is to get an irrelevant piece of paper that in the end doesn’t actually indicate a person’s capabilities.

Perhaps the piece of paper doesn’t actually indicate a person’s capabilities in part because enough students cheat to the point where getting a degree is meaningless. I do not object to that assessment.

Look, I’m not arguing that schooling is perfect. It’s not. Far from it. All I am saying is that if your goal is to actually learn and grow in skill, development, and understanding, then there is no shortcut. You have to do the work.


Sure. If you do enough basic math, you start to see things like how 2/8 can be simplified to 1/4, or you recognize that 10 is not a perfect square, or how you could reorder some operations to make things easier (sorry, examples from my kids). Little things like that where you don’t even think about it… it becomes second nature, and that makes you a lot faster because you are not worrying about those basic ideas or mechanics. Instead, you can think about more complicated things, such as which formulas to apply or the process to compute something.
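For what it’s worth, those little arithmetic facts can be checked mechanically. A short sketch using only Python’s standard library (my examples, mirroring the ones above):

```python
from fractions import Fraction
from math import isqrt

# 2/8 simplifies to 1/4 -- Fraction reduces to lowest terms automatically
assert Fraction(2, 8) == Fraction(1, 4)

# 10 is not a perfect square: no integer n satisfies n * n == 10
def is_perfect_square(n: int) -> bool:
    r = isqrt(n)       # integer square root (floor)
    return r * r == n

assert not is_perfect_square(10)
assert is_perfect_square(16)
```

Of course, the point is that a student eventually sees these facts without any tool at all.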

As another example, since I teach computer science, a lot of novice students struggle with basic programming language syntax… How exactly do you declare a variable? What order do things go in? How does a for loop work? Do you need a semicolon or parentheses? Etc. If you do enough programming, however, these things become second nature and you stop thinking about them. You just seemingly, intuitively, know these things and do them naturally without thinking, even though when you first started, it was really complicated and daunting and you probably spent a lot of time constructing a single line of code.
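As a hypothetical illustration of how much syntax even a tiny task involves, consider summing a few squares in Python; a novice has to get the initialization, the loop bounds, the indentation, and the accumulation all right at once:

```python
# Sum the first five squares: 1 + 4 + 9 + 16 + 25
total = 0                  # declare and initialize the accumulator
for i in range(1, 6):      # range(1, 6) yields 1, 2, 3, 4, 5
    total += i * i         # augmented assignment adds each square

print(total)               # prints 55
```

An experienced programmer writes this without conscious thought; a beginner has to reason through every line.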

Once you develop a foundation, however, you don’t need to worry about these low-level things. Instead, you worry about high-level issues such as how to organize larger pieces of code into functions or how to utilize different paradigms, etc.

This is why a basketball player, for instance, will shoot thousands of shots in practice or why a piano player will play a piece over and over for many hours. It’s so they don’t have to think about the low-level mechanics. It becomes muscle memory and it’s just natural to them.

I hope that makes sense.


Thanks for the thoughtful response.

> Using AI to answer a question is not necessarily preventing yourself from learning and developing mastery and understanding. The use of AI is a skill in the same way that any ability to look up information is a skill. But blindly putting information into an AI and copy/pasting the results is very different from using AI as a resource in a similar way one might use a book or an article as a resource.

I generally agree. That’s why I’m no longer banning AI in my courses. I’m allowing students to use AI to explain concepts, help debug, or as a reference. As a resource or learning aid, it’s fine or possibly even great for students.

However, I am not allowing students to generate solutions, because that is harmful and doesn’t help with learning. They still need to do the work and go through the process, AI assisted or not.

> This is a particularly long-winded way of pointing out something that’s always been true - the idea that you should learn how to do math in your head because ‘you won’t always have a calculator’, or that you need to understand how to do the problem in your head or how the calculator works to understand the material, is a false one, and it’s one that erases the complexity of modern life. Practicing the process helps you learn a specific skill in a specific context, and people who make use of existing systems to bypass the need for that skill are not better or worse - they are simply training a different skill.

I disagree with your specific example here. You should learn to do math in your head because it helps develop intuition about the relationship between numbers and the various mathematical operations. Without a foundational understanding of how to do the basics manually, it becomes very difficult to tackle more complicated problems or challenges, even with a calculator. Eventually, you do want to graduate to using a calculator because it is more efficient (and probably more accurate), but you will be able to use it much more effectively if you have a strong understanding of numbers and how the various operations work.

Your overall point about how a tool is used being important is true and I agree that if used wisely, AI or any other tool can be a good thing. That said, from my experience, I find that many students will take the easy way out and do as you noted at the top: “blindly putting information into an AI and copy/pasting the results”.


> The how is irrelevant.

What I usually tell students is that homework and projects are learning opportunities. The point isn’t for them to produce a particular artifact; it’s to go through the process and develop skills along the way. For instance, I do not need a program that can sort numbers… I can do that myself and there are a gazillion instances of that. However, students should do that assignment to practice learning how to code, how to debug, how to think through problems, and much more. The point isn’t the sorting program… it’s the process and experience.
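To make the sorting example concrete, here is the kind of small program such an assignment targets: a hand-written insertion sort in Python (my illustration, not an actual course assignment):

```python
def insertion_sort(values):
    """Sort a list in place -- the classic practice assignment."""
    for i in range(1, len(values)):
        key = values[i]
        j = i - 1
        # Shift larger elements right to open a slot for key
        while j >= 0 and values[j] > key:
            values[j + 1] = values[j]
            j -= 1
        values[j + 1] = key
    return values

assert insertion_sort([5, 2, 4, 6, 1, 3]) == [1, 2, 3, 4, 5, 6]
```

The result is trivially available via the built-in `sorted()`, which is exactly the point: the value of the exercise is tracing the loop and the shifting logic yourself, not the sorted list it produces.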

How do you get better at, say, gymnastics? You do a bunch of exercises and skills, over and over.

How do you get better at, say, playing the guitar? You play a lot of songs, over and over.

How do you get better at, say, writing? You write a lot, some good, some bad, over and over.

To get better at anything, you need to do the thing, a lot. You need to build intuition and muscle memory. Taking shortcuts prevents that and in the long run, hurts your learning and growth.

So viewing homework as just about the artifact you submit is missing the point and short-sighted. Cheating, whether using AI or not, prevents you from learning and developing mastery and understanding.


Maybe. It is true that people who would have cheated in the past are now just using AI in addition to the previous means. But from my experience teaching, the number of students cheating is also increasing because of how prevalent AI has become and how easy it is to use it.

AI has made cheating more frictionless, which means that a student who might not have, say, used Chegg (requires some effort) or copied a friend (requires social interaction) in the past can now just open a textbox and get a solution without much effort. LLMs have made cheating much easier, quicker, and safer (people regularly get caught using Chegg or copying other people; AI cheating can be much harder to detect). It is a huge temptation where the [short-term] benefits can greatly dwarf the risks.


2GB Raspberry Pi 5 on sale now at $50
cross-posted from: https://lemmy.ndlug.org/post/1001830 >> Today, we’re happy to announce the launch of the 2GB Raspberry Pi 5, built on a cost-optimised D0 stepping of the BCM2712 application processor, and priced at just $50. > > > The new D0 stepping strips away all that unneeded functionality, leaving only the bits we need. From the perspective of a Raspberry Pi user, it is functionally identical to its predecessor: the same fast quad-core processor; the same multimedia capabilities; and the same PCI Express bus that has proven to be one of the most exciting features of the Raspberry Pi 5 platform. However, it is cheaper to make, and so is available to us at somewhat lower cost. And this, combined with the savings from halving the memory capacity, has allowed us to take $10 out of the cost of the finished product. > > > So, while our most demanding users — who want to drive dual 4Kp60 displays, or open a hundred browser tabs, or compile complex software from source — will probably stick with the existing higher memory-capacity variants of Raspberry Pi 5, many of you will find that this new, lower-cost variant works perfectly well for your use cases.
fedilink

> Archinstall 2.8.2 menu-based installer for the Arch Linux distribution is now available with various improvements and bug fixes. > Work continues on the new Curses-based menu with support for text input menu, scrolling functionality for previews, and more.
fedilink

I currently use Ubuntu for all my machines (desktops, laptops, and servers), but I used to use Void Linux on my machines for about 6 years, including on a couple of VPSes. Since you are familiar with Void Linux, you could stick with that and just use Docker/Podman for the individual services such as Matrix, Mastodon, etc.

In regards to Debian, while the packages are somewhat frozen, they do get security updates and backports by the Debian security team:

https://www.debian.org/security/

There is even a LTS version of Debian that will continue backporting security updates:

https://www.debian.org/lts/

Good luck!


Not a bad list. Off the top of my head, I would say it is missing two things:

  1. Discrete Math (formal logic, sets, probability, etc.)
  2. Theory of Computing (not just algorithms, but things like Turing machines, NFAs, DFAs, etc.)

These may not be strictly the most practical courses, but I think a Computer Science degree would be incomplete without them.

The “Introduction to Operating Systems” link no longer works (redirects to “Autonomous Systems” courses). Instead, I would recommend using Operating Systems: Three Easy Pieces, which is the textbook I use in my OS course.

Finally, something like The Missing Semester of Your CS Education would also be a nice extra.


cross-posted from: https://lemmy.ndlug.org/post/988335 >> The Sovereign Tech Fund is piloting a fellowship program to pay open source maintainers, aiming to address structural issues and support open digital infrastructure in the public interest. > > > Over the past two years, STF has successfully contracted over 40 FOSS projects, enhancing their technical sustainability through targeted milestones. While some contracts are with individual maintainers, most involve software development companies or foundations. Despite this success, a new and innovative program is needed to acknowledge the lived reality of how many maintainers work: stretched across multiple technologies, multi-faceted, and often behind the scenes. > > > Most maintainers are unpaid, working in their spare time, which both impacts projects’ stability and can lead to stress and burnout. The Tidelift Open Source Maintainer Study found that 59% of maintainers have quit or considered quitting, posing a risk to the digital infrastructure we all rely on. To even begin to mitigate this risk, it's crucial to understand the role of maintainers, who typically lead and oversee project development, review changes, manage community interactions, release updates, and fix security issues. > > > The application phase will start by the end of the third quarter of 2024, and with the goal that selected maintainers can begin the fellowship in the fourth quarter. The first fellowship pilot will run throughout 2025, and we will evaluate it on an ongoing basis. Based on these evaluations, our experiences running the fellowship, and feedback from participants, we’ll determine how to expand and grow the program for a stronger and healthier open source ecosystem.
fedilink

> The Sovereign Tech Fund is piloting a fellowship program to pay open source maintainers, aiming to address structural issues and support open digital infrastructure in the public interest. > Over the past two years, STF has successfully contracted over 40 FOSS projects, enhancing their technical sustainability through targeted milestones. While some contracts are with individual maintainers, most involve software development companies or foundations. Despite this success, a new and innovative program is needed to acknowledge the lived reality of how many maintainers work: stretched across multiple technologies, multi-faceted, and often behind the scenes. > Most maintainers are unpaid, working in their spare time, which both impacts projects’ stability and can lead to stress and burnout. The Tidelift Open Source Maintainer Study found that 59% of maintainers have quit or considered quitting, posing a risk to the digital infrastructure we all rely on. To even begin to mitigate this risk, it's crucial to understand the role of maintainers, who typically lead and oversee project development, review changes, manage community interactions, release updates, and fix security issues. > The application phase will start by the end of the third quarter of 2024, and with the goal that selected maintainers can begin the fellowship in the fourth quarter. The first fellowship pilot will run throughout 2025, and we will evaluate it on an ongoing basis. Based on these evaluations, our experiences running the fellowship, and feedback from participants, we’ll determine how to expand and grow the program for a stronger and healthier open source ecosystem.
fedilink

Debian Day 2024 Party Events
> The Debian Project was officially founded by Ian Murdock on **1993-08-16**. The Debian Community celebrates its birthday, Debian Day, on this day each year. (It has also been called *Debian Appreciation Day*.)
fedilink

> Switching away from Ubuntu, again, to try Archcraft. Here's what I think! > Archcraft is for Linux users who want a pre-configured window manager with a unique look out of the box. You get a pretty theme setup, but you can choose from a couple of pre-installed options (10 free themes) as well. > You can pick other window managers like Sway, Wayland desktop session, and unlock access to extra themes on Ko-fi by supporting the developer. So, some can call it a freemium model, and I do not mind that, considering you are paying the dev to give you a refined pre-configured experience, saving all the time to set it up yourself. > But, of course, nothing is ever perfect. Everything has flaws. It is you who pick what flaws you can live with, and what you can't.
fedilink

> We're back with some new milestones thanks to the continued growth of Flathub as an app store and the incredible work of both our largely volunteer team and our growing app developer community: > - 70% of the most popular apps are verified > - 100+ curated quality apps > - 4 million active users > - Over 2 billion downloads
fedilink

Announcing Lix 2.91 “Dragon’s Breath”
> Lix is a Nix implementation focused on reliability, predictability, friendliness, developed by a community of people from around the world. We have long term plans to incrementally evolve Nix to work in more places, to make it more reliable and secure, and to update the language and semantics to correct past mistakes and reduce errors, all the while providing an amazing tooling experience.
fedilink

For higher-level widgets than curses provides, you can use a library like textual or urwid.


I’m not so sure… for the following reasons:

  1. Despite using a version of the Linux kernel in ChromeOS, Chromebooks don’t always have the best hardware (i.e. driver) support from the mainline kernel used by most distributions. That’s why there are niche distributions like GalliumOS which provide tweaks to support the touchpad and audio devices in many Chromebooks. It’s similar to how Android is Linux, but it’s not the standard Linux we are familiar with (so the hardware support is different).

  2. Many Chromebooks have really poor specs: low-wattage CPUs, small amounts of storage, and low amounts of RAM. While they may be newer, they are probably less performant than older laptops. This has changed in recent years with the Chromebook Plus program (or whatever it is called), which mandates a reasonable set of baseline features, but that applies to current Chromebooks and not the ones from the COVID era.

  3. Related to the previous point, many Chromebooks are not serviceable or upgradeable, while Thinkpads and some recent laptops are. You are unlikely to open up a Chromebook and be able to replace, say, the RAM or SSD, which would be a showstopper for a lot of people who like Thinkpads.

So… unfortunately, I think this take is a bit of a miss and I don’t really see it happening. I would be happy to be proven wrong though, since my kids have two Chromebooks from the COVID era :}


cross-posted from: https://lemmy.ndlug.org/post/970397 >> The first Ubuntu 24.04 point release won’t be released this week, as initially planned. > > > Ubuntu developers had been aiming to release Ubuntu 24.04.1 LTS on Thursday, August 19th, but has been delayed due to ‘high-impact upgrade bugs’. > > > As a result, Ubuntu 24.04.1 LTS is now due for release on Thursday, August 29th, two weeks later than initially planned. > > - Official Announcement: [Ubuntu 24.04.1 point-release delayed until August 29](https://discourse.ubuntu.com/t/ubuntu-24-04-1-point-release-delayed-until-august-29/47110/1) > > - Tracking Document: [Noble Numbat (24.04.1 LTS) Point-Release Status Tracking](https://discourse.ubuntu.com/t/noble-numbat-24-04-1-lts-point-release-status-tracking/46972)
fedilink

If you want something graphical to install a single deb, you can install gdebi:

https://itsfoss.com/gdebi-default-ubuntu-software-center/

With this installed, anytime you download a deb, it will open the deb in gdebi and allow you to install the package graphically.


No, most likely PipeWire would be used to implement the protocol for various compositors.

Think of the protocols as high-level descriptions of interfaces (or designs) that specify what needs to be implemented to support a particular feature (in this case, capturing images of a “screen”). Looking at this one, it describes an ext_image_capture_source_v1 object that has various methods such as create_source and destroy. Different compositors could then implement or support this interface with whatever technology they wish (most will rely on PipeWire).

This is already the case with the existing screensharing protocol. For instance, wlroots uses PipeWire buffers in xdg-desktop-portal-wlr.
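The interface-versus-implementation split described above can be sketched loosely in Python (entirely illustrative; the real protocol is a Wayland XML specification, and real compositors negotiate buffers over PipeWire rather than returning strings):

```python
from abc import ABC, abstractmethod

class ImageCaptureSource(ABC):
    """Rough analogue of the ext_image_capture_source_v1 idea:
    the protocol names the operations; each compositor supplies
    its own implementation."""

    @abstractmethod
    def create_source(self) -> str: ...

    @abstractmethod
    def destroy(self) -> None: ...

class PipewireBackedSource(ImageCaptureSource):
    """Hypothetical compositor backend that fulfills the interface."""

    def create_source(self) -> str:
        # Stand-in for handing the client a PipeWire stream
        return "pipewire-stream-handle"

    def destroy(self) -> None:
        pass  # release any capture resources

# A client programs against the interface, not the backend details
source: ImageCaptureSource = PipewireBackedSource()
assert source.create_source() == "pipewire-stream-handle"
```

This is why several compositors can support the same protocol while differing completely in how they produce the captured frames.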


cross-posted from: https://lemmy.ndlug.org/post/965607 >> GNOME 46.4 is now available as the fourth maintenance update to the latest GNOME 46 desktop environment series with various bug fixes. > > > GNOME 46.4 is here a month after GNOME 46.3 with improvements for connecting to WPA2 enterprise networks, glitches in the looking-glass effect, Persian on-screen keyboard layout, overview startup notification, keyboard navigation in app folders, and nested popovers on Wayland. > > Release Announcement: [GNOME 46.4 Released](https://discourse.gnome.org/t/gnome-46-4-released/22718)
fedilink

> The ext-image-capture-source-v1 and ext-image-copy-capture-v1 screen copy protocols build upon wlroots' wlr-screencopy-unstable-v1 with various improvements for better screen capture support under Wayland. These new protocols should allow for better performance and window capturing support for use-cases around RDP/VNC remote desktop, screen sharing, and more. Merge Request: [Create ext-image-capture-source-v1 and ext-image-copy-capture-v1](https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/124)
fedilink

> Canonical’s announced a major shift in its kernel selection process for future Ubuntu releases. An “aggressive kernel version commitment policy” pivot will see it ship the latest upstream kernel code in development at the time of a new Ubuntu release. Original announcement: [Kernel Version Selection for Ubuntu Releases](https://www.omgubuntu.co.uk/2024/08/canonical-announce-major-ubuntu-kernel-change)
fedilink

> Aqua Nautilus researchers have identified a security issue that arises from the interaction between Ubuntu’s command-not-found package and the snap package repository. While command-not-found serves as a convenient tool for suggesting installations for uninstalled commands, it can be inadvertently manipulated by attackers through the snap repository, leading to deceptive recommendations of malicious packages.
fedilink

> Ubuntu Core Desktop will not be released alongside Ubuntu 24.04 LTS in April, as originally hoped. > Canonical doesn’t go into details about what specific issues need resolving. One imagines, given that the first Ubuntu Core Desktop release was going to be a preview and not a recommended download, it’s a myriad bugs/difficulties — ones not easily sorted.
fedilink

Damn Small Linux 2024
> The New DSL 2024 has been reborn as a compact Linux distribution tailored for low-spec x86 computers. It packs a lot of applications into a small package. All the applications are chosen for their functionality, small size, and low dependencies. DSL 2024 also has many text-based applications that make it handy to use in a term window or TTY. > The new goal of DSL is to pack as much usable desktop distribution into an image small enough to fit on a single CD, or a hard limit of 700MB. This project is meant to service older computers and have them continue to be useful far into the future. Such a notion sits well with my values. I think of this project as my way of keeping otherwise usable hardware out of landfills. > As with most things in the GNU/Linux community, this project continues to stand on the shoulders of giants. I am just one guy without a CS degree, so for now, this project is based on antiX 23 i386. AntiX is a fantastic distribution that I think shares much of the same spirit as the original DSL project. AntiX shares pedigree with MEPIS and also leans heavily on the geniuses at Debian. So, this project stands on the shoulders of giants. In other words, DSL 2024 is a humble little project!
fedilink

> Timothée Besset, a software engineer who works on the Steam client for Valve, took to Mastodon this week to reveal: “Valve is seeing an increasing number of bug reports for issues caused by Canonical’s repackaging of the Steam client through snap”. > “We are not involved with the snap repackaging. It has a lot of issues”, Besset adds, noting that “the best way to install Steam on Debian and derivative operating systems is to […] use the official .deb”. > Those who don’t want to use the official Deb package are instead asked to ‘consider the Flatpak version’ — though like Canonical’s Steam snap the Steam Flatpak is also unofficial, and not directly supported by Valve.
fedilink