cultural reviewer and dabbler in stylistic premonitions

  • 3 Posts
  • 54 Comments
Joined 3Y ago
Cake day: Jan 17, 2022


xzbot from Anthony Weems makes it possible to patch the corrupted liblzma so that your own key replaces the one used to verify the signed ssh certificate, so adding this to your instructions might enable me to demonstrate sshing into the VM :)

Fun :)

Btw, instead of installing individual vulnerable debs as those kali instructions I linked to earlier suggest, you could also point debootstrap at the snapshot service so that you get a complete system with everything as it would’ve been in late March and then run that in a VM… or in a container. You can find various instructions for creating containers and VMs using debootstrap (eg, this one which tells you how to run a container with systemd-nspawn; but you could also do it with podman or docker or lxc). When the instructions tell you to run debootstrap, you just want to specify a snapshot URL like https://snapshot.debian.org/archive/debian/20240325T212344Z/ in place of the usual Debian repository url (typically https://deb.debian.org/debian/).
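For example, here's a rough sketch of that approach (the suite and target directory are my own choices for illustration, not from any particular guide; assumes debootstrap and systemd-container are installed):

```
# build a complete Debian unstable system as it was on 2024-03-25,
# using the snapshot service as the mirror
sudo debootstrap sid /var/lib/machines/xz-demo \
    https://snapshot.debian.org/archive/debian/20240325T212344Z/

# then boot it as a container
sudo systemd-nspawn -D /var/lib/machines/xz-demo --boot
```

Note that apt inside the resulting system will probably complain that the snapshot's Release files have expired; -o Acquire::Check-Valid-Until=false works around that.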


A daily ISO of Debian testing or Ubuntu 24.04 (noble) beta from before the first week of April would be easiest, but those aren't archived anywhere that I know of. The backdoored xz didn't make it into any stable releases of any Debian-based distros.

But even when you have a vulnerable system running sshd in a vulnerable configuration, you can’t fully demo the backdoor because it requires the attacker to authenticate with their private key (which has not been revealed).

But, if you just want to run it and observe the sshd slowness that caused the backdoor to be discovered, here are instructions for installing the vulnerable liblzma deb from snapshot.debian.org.
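The gist of it is roughly the following (a hypothetical sketch, not the linked instructions verbatim; the exact filename and architecture may differ, so check the snapshot pool listing first):

```
# fetch the backdoored liblzma build from the snapshot archive and install it
# (dpkg -i will happily downgrade over a newer version)
wget https://snapshot.debian.org/archive/debian/20240325T212344Z/pool/main/x/xz-utils/liblzma5_5.6.1-1_amd64.deb
sudo dpkg -i liblzma5_5.6.1-1_amd64.deb

# keep apt from immediately upgrading it back to a fixed version
sudo apt-mark hold liblzma5
```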


You can use Wireshark to see the packets and their IP addresses.

https://www.wireshark.org/download.html

https://www.wireshark.org/docs/

A word of warning though: finding out about all the network traffic that modern software sends can be deleterious to mental health 😬
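If you'd rather watch from a terminal, Wireshark also ships a command-line capture tool, tshark; a minimal example that just prints the source and destination address of each packet:

```
# capture on all interfaces and print src/dst IP per packet
sudo tshark -i any -T fields -e ip.src -e ip.dst
```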


I do have wireguard on my server as well, I guess it’s similar to what tailscale does?

Tailscale uses wireguard but adds a coordination server to manage peers and facilitate NAT traversal (directly when possible, and via an intermediary server when it isn't).

If your NAT gateway isn't rewriting source port numbers, it is sometimes possible to make wireguard punch through NAT on its own, provided both peers configure endpoints for each other and turn on keepalives.
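For example (a sketch only; the interface name, key, and endpoint are placeholders, and both peers need to do the equivalent, each pointing at the other):

```
# tell wg0 where to reach the peer, and send a keepalive every 25s
# so the NAT mapping stays open
sudo wg set wg0 peer '<peer-public-key>' \
    endpoint peer.example.net:51820 \
    persistent-keepalive 25
```

The same thing in a wg-quick config file is Endpoint = host:port and PersistentKeepalive = 25 under the matching [Peer] section.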

Do you know if Yggdrasil does something similar and if we exchange data directly when playing over Yggdrasil virtual IPv6 network?

From this FAQ it sounds like yggdrasil does not attempt to do any kind of NAT traversal, so two hosts can only be peers if at least one of them has an open port. I don't know much about yggdrasil, but from this FAQ answer it sounds like it runs over TCP (so using TCP applications means two layers of TCP), which is not going to be conducive to a good gaming experience.

Samy Kamkar’s amazing pwnat tool might be of interest to you.


I have a device without public IP, AFAIK behind NAT, and a server. If I use bore to open a port through my server and host a game, and my friends connect to me via IP, will we have big ping (as in, do packets travel to the server first, then to me) or low ping (as in, do packets travel straight to me)?

Unfortunately you will have "big ping": bore (and everything on that page I linked) is strictly for tunneling, which means all packets go through the tunnel server.

Instead of tunneling, you can try various forms of hole punching for NAT traversal, which (depending on the NAT implementation) will sometimes succeed in establishing a direct connection between users. You can use something like tailscale (and, if you want to run your own coordination server, headscale), which will try its best to punch a hole for a p2p connection and will only fall back to relaying through a server if absolutely necessary.
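If you go the tailscale route, it will also tell you whether a given connection ended up direct or relayed. A quick way to check (the peer name here is a placeholder):

```
tailscale status          # per-peer lines indicate "direct <ip:port>" or "relay <region>"
tailscale ping some-peer  # reports whether replies come back directly or via a DERP relay
```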


See https://github.com/anderspitman/awesome-tunneling for a list of many similar things. A few of them automatically set up Let's Encrypt certs for unique subdomains so you can have end-to-end HTTPS.


Mattermost isn’t e2ee, but if the server is run by someone competent who is allowed to see everything anyway (eg, it’s all group chat and they’re in all the groups), then e2ee matters less than it otherwise would: it only protects against the server being compromised, and if you’re using a web-based solution that does have e2ee, a compromised server can circumvent it anyway.

If you’re OK with not having e2ee, I would recommend Zulip over Mattermost. Mattermost is nice too though.

edit: oops, i see you also want DMs… Mattermost and Zulip both have them, but without e2ee. 😢

I could write a book about problems with Matrix, but if you want something relatively easy and full featured with (optional, and non-forward-secret) e2ee then it is probably your best bet today.


Tuta is most likely a honeypot, and in any case it is pseudo-open source, so it's off-topic in this community.



Ok, you and @d3Xt3r@lemmy.nz are both mods of /c/linux@lemmy.ml now. Thanks!


Not necessarily true - that right to modify/redistribute depends on the exact license being applied.

If you don’t have the right to modify and redistribute it (and to do so commercially) then it does not meet the definitions of free software or open source.

For example, the Open Watcom Public License claims to be an “open source” license, but it actually doesn’t allow making modifications.

The Sybase Open Watcom Public License does allow making modifications, and distributing modified versions. The reason why the FSF has not approved it is that it requires you to publish source code even if you only wanted to run your modified version yourself and didn’t actually want to distribute anything to anyone. (The Watcom license is one of the few licenses which is approved by OSI but not FSF. You can see the other licenses which are approved by one but not the other by sorting this table.)

The FSF’s own AGPL license is somewhat similar, but it only imposes the requirement if you run the software for someone else over a network. (Neither of these requirements are likely to be enforceable by copyright law, as I explained in my comment about the AGPL in the thread which this thread is about…)

This is also why we specifically have the terms “free software” or “FOSS” which imply that you are indeed allowed to modify and redistribute.

I would recommend reading this: https://www.gnu.org/philosophy/open-source-misses-the-point.en.html

I would recommend that you re-read that, because it actually explains that the two terms refer to essentially the same category of software licenses (while it advocates for using the term free software to emphasize the philosophical aspects of those licenses).


Open source just means that the source code is available; FOSS, however, implies that you're free to modify and redistribute the program

Incorrect. “Open Source” also means that you are free to modify and redistribute the software.

If the source code is merely available but not free to modify and/or redistribute, then it is called source-available software.


Ok, I just stickied this post here, but I am not going to manage making a new one each week :)

I am an admin at lemmy.ml and was actually only added as a mod to this community so that my deletions would federate (because, a while ago, there was a bug where non-mod admin deletions weren't federating). The other mods here are mostly inactive, and most of the mod activity is by me and other admins.

Skimming your history here, you seem alright; would you like to be a mod of /c/linux@lemmy.ml ?


Thanks. They are no longer a mod of this community. (I wrote this comment to them and they did not reply.)


As of today, NixOS (like most distros) has reverted to a version slightly prior to the release with the Debian-or-Redhat-specific sshd backdoor which was inserted into xz just two months ago. However, the saboteur had hundreds of commits prior to the insertion of that backdoor, and it is very likely that some of those contain subtle intentional vulnerabilities (aka “bugdoors”) which have not yet been discovered.

As (retired) Debian developer Joey Hess explains here, the safest course is probably to switch to something based on the last version (5.3.1) released prior to Jia Tan getting push access.

Unfortunately, as explained in this debian issue, that is not entirely trivial: many dependents require symbols which are only present in the recent (pre-backdoor but potentially sabotaged) versions and not in the older ones, and those older versions also contain at least two known vulnerabilities which were fixed during the multi-year period when the saboteur was contributing.

After reading Xz format inadequate for long-term archiving (first published eight years ago…) I’m convinced that migrating the many projects which use XZ today (including DPKG, RPM, and Linux itself) to an entirely different compression format is probably the best long-term plan. (Though we’ll always still need tools to read XZ archives for historical purposes…)



Hi @haui_lemmy@lemmy.giftedmc.com,

fyi icymi due to this thread someone posted this other thread asking “Is it appropriate for someone to be a mod here when they don’t understand open source, and insult users in the community?”.

I don’t have time to read all ~200 comments in these two threads, but I do think that being a moderator of /c/opensource@lemmy.ml requires knowing what FOSS is to be able to remove posts promoting things which are not.

Hopefully the replies here (again, I have not read even half of this thread…) have made you better informed?

In case you haven't yet, I would highly recommend that you read these two documents: the Free Software Definition and the Open Source Definition. (You can start with their Wikipedia articles and follow links from there to the actual documents.)

In short, the answer to your question (“Is there a License that requires the user to donate if they make revenue?”) is yes, there are many such licenses, but they are definitively not FOSS licenses (despite what some people who haven’t read the above definitions might try to tell you).

I won’t enumerate any of the non-FOSS licenses which attempt such a thing, because I recommend against the use of such licenses or software licensed under them.

BTW, I saw you wrote in another comment:

By now I get that FOSS mostly implies free work for corporations. I‘ll just go with agpl to ensure they get nothing from my work.

While it is extremely commonplace for corporations to benefit from FOSS without financially supporting it at all, I vehemently disagree that that is what FOSS “mostly implies”. In fact, the opposite is more common: the vast majority of free software users are not paying anything to the companies who have funded an enormous amount of its development. A few hundred companies pay tens of thousands of individual developers to develop and maintain the Linux kernel, for instance.

Regarding the second sentence of yours that I quoted above, in case you haven’t understood this yet: the AGPL does not prevent commercial use of your work. If you write a web app and license it AGPL, you are giving me permission to run it, modify it, redistribute my modified version, and to charge money for it without giving you anything.

What the AGPL does, and why many companies avoid it, is impose the requirement that I (the recipient of your software) offer the source code to your software (and any modifications I made to it) under that same license not only to anyone I distribute it to but also to anyone using the software over a network on my server.

If the software were licensed GPL instead of AGPL, I would only be required to offer GPL-licensed source code to people when I distribute the software to them. Eg, I could improve a GPL web app and it is legal to not share my improvements (to the server-side code) with anyone at all because the software is not being distributed - it is just running on my server.

By imposing requirements about how you run the software (eg, if you put an AGPL notice in the UI, I am not allowed to remove it) the AGPL is more than just a copyright license: violations of the GPL and most FOSS licenses are strictly copyright violations and can be enforced as such, but violating the part of the AGPL where it differs from the GPL would not constitute copyright infringement because no copying is taking place. Unlike almost every other FOSS license, the AGPL is both a copyright license and an end-user license agreement.

For this reason, many people have misgivings about the AGPL. However, if you want to scare companies away from using your software at all (and/or require them to purchase a different license from you to use it under non-AGPL terms, which is only possible if you require all contributors to assign copyright or otherwise give you permission to dual-license their work) while still using a license which the FOSS community generally accepts as FOSS… AGPL is probably your best bet.

HTH.

p.s. I’m not a lawyer, this isn’t legal advice, etc etc :)


+1 to ctrl-alt-F<something> (start at F1 and go up to move through the different virtual terminals). Once in a while there are graphics problems which this will fix.

If you’re using GNOME Shell on X you can reload the shell (and all of its extensions) with alt-f2 and then in the “Run a command” dialog that appears type r and hit enter. Unfortunately this doesn’t work in GNOME on Wayland.


So it will be only Systemd

What? No. Did you read the linked post? Some desktop environments will have more functionality and work better if you do use systemd, but (for now, at least) you can still run even GNOME under OpenRC if you want.





They’d need to implement something like SRP.

Update: I contacted the developers to bring my comment to their attention and it turns out they have already implemented SRP to address this problem (but they haven’t updated their architecture document about it yet).


It is, but in this case I think it isn’t actually a weakness for the reasons I explained.


That’s complicated to do correctly. Normally, for the server to verify the user has the correct password, it needs to know or receive the password, at which point it could decrypt all the user’s files. They’d need to implement something like SRP.

What I proposed is that the server does not know the password (of course), but that it knows a thing derived from it (let's call it the loginSecret) which the client can send to obtain the encryptedMasterKey. This can be derived in a similar fashion to the keyEncryptionKey (eg, they could be different outputs of an HKDF). The downside to the server knowing something derived from the passphrase is that it enables the server to do an offline brute force of it; but in any system like this, where the server stores something encrypted using [something derived from] the passphrase, the server already has that ability.
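To illustrate the "different outputs of an HKDF" part, here's a minimal sketch using OpenSSL 3.x's kdf subcommand. The names loginSecret and keyEncryptionKey are just the hypothetical labels from above, the salt is a fixed demo value, and a real client would feed the HKDF from the output of a slow password KDF (argon2/scrypt) rather than a raw passphrase:

```
# placeholder for the output of a slow password KDF
STRETCHED_PASSPHRASE="example-output-of-a-slow-password-kdf"

# two independent 32-byte outputs of the same HKDF, separated only by the info label
openssl kdf -keylen 32 -kdfopt digest:SHA2-256 -kdfopt salt:demo-salt \
    -kdfopt key:"$STRETCHED_PASSPHRASE" -kdfopt info:loginSecret HKDF

openssl kdf -keylen 32 -kdfopt digest:SHA2-256 -kdfopt salt:demo-salt \
    -kdfopt key:"$STRETCHED_PASSPHRASE" -kdfopt info:keyEncryptionKey HKDF
```

The first value is the one the client sends (and the server stores) to prove knowledge of the passphrase before being handed the encryptedMasterKey; the second never leaves the client.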

Is there any downside to what I suggested, vs the current design?

And is there some reason I’m missing which would justify adding the complexity of SRP, vs what I proposed above?

The only reason I can think of would be to protect against a scenario where an attacker has somehow obtained the user’s loginSecret from the server but has not obtained their encryptedMasterKey: in that case they could use it to request the encryptedMasterKey, and then could make offline guesses at the passphrase using that. But, they could also just use the loginSecret for their offline brute-force. And, using SRP, the server still must also store something the user has derived from the password (which is equivalent to the loginSecret in my simpler scheme) and obtaining that thing still gives the adversary an offline brute-force opportunity. So, I don’t think SRP provides any benefit here.


edit: the two issues i raised in this comment had both already been addressed.

this was the developer’s reply on matrix:

  1. We do have a CLA: https://cla-assistant.io/ente-io/ente
  2. We will update the iOS app to offer you an option to point to your self hosted instance (so that you can save yourself the trouble of building it): https://github.com/ente-io/ente/discussions/504
  3. The portion of the document that deals with authentication has been outdated, my bad. We’ve adopted SRP to fix the concerns that were pointed out: https://ente.io/blog/ente-adopts-secure-remote-passwords/
here is my original comment:

AGPL-3.0

Nice

This would be nice, but this repo includes an iOS app, and AGPL3 binaries cannot be distributed via Apple's App Store!

AGPL3 (without a special exception for Apple, like NextCloud’s iOS app has) is incompatible with iOS due to the four paragraphs of the license which mention “Installation Information” (known as the anti-tivoization clause).

Only the copyright holder(s) are able to grant Apple permission to distribute binaries of AGPL3-licensed software to iOS users under non-AGPL3 terms.

Every seemingly-(A)GPL3 app on Apple's App Store either has copyright assignment so that a single entity has the sole right to distribute binaries in the App Store (eg, Signal messenger) or uses a modified license to carve out an Apple-specific exception to the anti-tivoization clause (eg, NextCloud). In my opinion, the first approach is faux free software, because anyone forking the software is not allowed to distribute it via the channel where the vast majority of users get their apps. (In either case, users aren't allowed to run their own modified versions themselves without agreeing to additional terms from Apple, which is part of what the anti-tivoization clause is meant to prevent.)

Only really nice when no CLA is required and every contributor retains their copyright. Ente doesn't seem to require a CLA.

I definitely agree here! But if it’s true that they’re accepting contributions without a CLA, and they haven’t added any iOS exception to their AGPL3 license, then they themselves would not be allowed to ship their own iOS app with 3rd party contributions to it! 😱 edit: it’s possible this is the case and Apple just hasn’t noticed yet, but that is not a sustainable situation if so.

If anyone reading this uses this software, especially on iOS, I highly recommend that you send the developers a link to this comment and encourage them to (after getting the consent of all copyright holders) add something akin to NextCloud’s COPYING.iOS to their repository ASAP.

cc @ioslife@lemmy.ml @baduhai@sopuli.xyz @skariko@feddit.it

(i’m not a lawyer, this is not legal advice, lol)

edit: in case a dev actually sees this… skimming your architecture document it looks like when a user’s email is compromised (“after you successfully verify your email”), the attacker is given the encryptedMasterKey (encrypted with keyEncryptionKey, which is derived from a passphrase) which lets them perform an offline brute-force attack on the passphrase. Wouldn’t it make more sense to require the user to demonstrate knowledge of their passphrase to the server prior to giving them the encryptedMasterKey? For instance, when deriving keyEncryptionKey, you could also derive another value which is stored on the server and which the client must present prior to receiving their encryptedMasterKey. The server has the opportunity to do offline attacks on the passphrase either way, so it seems like there wouldn’t be a downside to this change. tldr: you shouldn’t let adversaries who have compromised a user’s email account have the ability to attack the passphrase offline.

(i’m not a cryptographer, but this is cryptography advice)


The only thing I want that I don’t have right now is horizontal monitor splits for vertical monitors.

You can do that with this shell extension (which is the upstream of Ubuntu's "gnome-shell-extension-tiling-assistant" package; on Ubuntu it is installed by default and appears as "Ubuntu Tiling Assistant" in the GNOME Extensions manager).


Can containers boot on their own? Then they are hosts, if not they are guests.

It depends what you mean by "boot". Linux containers by definition do not run their own kernel, so a kernel never boots inside them. They typically (though not always) have their own namespace for process IDs (among other things), and in some cases process ID 1 inside the container is actually another systemd (or another init system).

However, more often PID 1 is actually just the application being run in the container. In either case, people do sometimes refer to starting a container as “booting” it; I think this makes the most sense when PID 1 in the container is systemd as the word “boot” has more relevance in that scenario. However, even in that case, nobody (or at least almost nobody I’ve ever seen) calls containers “guests”.

As to calling containers "hosts", I'd say it depends on whether the container is in its own network namespace. For example, if you run podman run --rm -it --network host debian:bookworm bash you will have a container that is in the same network namespace as your host system, and it will thus have the same hostname. But if you omit --network host from that command then it will be in its own network namespace, with a different IP address, behind NAT, and it will have a randomly generated hostname. I think it makes sense to refer to the latter kind of container as a separate host in some contexts.
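A quick way to see that difference for yourself (assumes podman and the debian:bookworm image; uname -n just prints the hostname):

```
# shares the host's network namespace (and hostname), so this prints your machine's hostname
podman run --rm --network host debian:bookworm uname -n

# default: own network namespace behind NAT, with a randomly generated hostname
podman run --rm debian:bookworm uname -n
```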


Your main OS is called the host and the container is called the guest

The word “guest” is generally used for virtual machines, not containers.




How is Ubuntu Touch in 2024?

In 2023, after the 2022 Ubuntu LTS was released, Ubuntu Touch finally upgraded from the 2016 LTS… to the 2020 LTS. Which they’re still on today.

They’ve announced they’re finally going to get their shit together and be based on recent releases of things after Ubuntu 24.04 is released later this year. Personally, I wouldn’t recommend trying it before they do that.

https://postmarketos.org/ is pretty great today.


I considered putting logos of some of the many more user-friendly pre-ubuntu distros in the meme but was lazy.

Debian was intended to be for regular desktop users back then too, though.


What Linux distribution came before Ubuntu that was specifically designed to be user friendly for a non-technical user?

There were a bunch of distros advertising ease of use; several were even sold in physical boxes (which was the style at the time) and marketed to consumers at retail stores like Best Buy years before Ubuntu started.

Here are four pictures of the physical packaging for three of those pre-ubuntu desktop distros designed to be user friendly and marketed to the general public:

[Images: cardboard packaging for two Caldera OpenLinux boxes, SuSE 8.1, and Mandrake 7.2]

Ubuntu was better than what came before it in many ways, and it deserves credit for advancing desktop Linux adoption both then and now, but it was not “one of the first” by any stretch.



There were dozens of others in the 11 years between the first and Ubuntu.





There is a version of VLC for the Nvidia Shield, but it has a somewhat irritating UI and I don’t know if it can actually read the menus like the desktop version can.


I don’t know about the other two mods here but I heard @AgreeableLandscape@lemmy.ml plans to return from hiatus eventually.

I've done most of the mod actions here in the last year, first as an admin, but eventually I was added as a mod in this community too because there was a bug (fixed in 0.19) which prevented admins' mod actions from federating (and there were some egregious posts which kept getting remote reports).

Thanks for the offer of help @beta_tester@lemmy.ml but I think the other admins and I (who are all longtime Linux users) are doing OK moderating this community. Also I see that yesterday you re-posted something immediately after it was deleted, with a title referencing its deletion 😦

If you see something that should be deleted, please do flag it, and if you’re unhappy with mod actions you can always message a mod or ask about it in /c/meta@lemmy.ml