• 19 Posts
  • 236 Comments
Joined 9M ago
Cake day: Jun 02, 2023


Had a similar problem with OCZ drives before they got acquired by Toshiba. Bought three, 100% failure rate just after the warranty expired.


I also got the message via email.

My account is wiped and I haven't logged in for ages.


On that other website I was using before Lemmy, someone said that if you complain to support, they let you keep the $2/month subscription instead of $4. Although in those cases I would just cancel out of spite. They can't increase the price by 100% with just 4 days' notice!


RIP VMware.

Broadcom prefers to milk the top 500 customers with unreasonable fees rather than bother with the rest of the world. They know that nobody with a brain would intentionally build a new datacenter on VMware solutions.


Wow, I wish there were a company like this in my country, with those prices.


From the article it looks like ZFS is the perfect file system for SMR drives, as it would try to cache random writes.



Yes, I got lots of lag due to WordPress using all the CPU time to render the same page over and over again.

I could have wasted a few days setting up a caching proxy and other stuff, but for a website with 10 monthly visitors that is overkill; it's faster to block everyone outside the target audience. If someone is visiting from Russia or China they have a 120% malicious intent in my case, so there's no need to serve them content.


It's not a distributed denial of service, but a single bot requesting the same fucking WordPress page every 100 ms is still a denial of service for my poor home server. In one click I was able to ban the whole Asian continent without much effort.


Tell that to the Russian bots that are hammering my personal site for some reason.

It's way easier to make a rule like “no Russia” or even “only my country”.


You can easily set up SSL with a self-signed certificate, they get nothing.
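For anyone wondering how: generating one is a single command (a sketch; the file names, validity period and CN are placeholders):

```
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout key.pem -out cert.pem -subj "/CN=myserver.lan"
```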


Simple reason: at home I don’t have a static IPv4 address and I can’t do port forwarding


It's the list of IPs that belong to Cloudflare.

I think that because I'm using tunnels it's not necessary to have all of them, just the Docker IP address space.


How I accidentally slowed down my nextcloud instance for months
I am running this Docker image: https://github.com/nextcloud/docker with a Cloudflare tunnel, meaning the webserver would see all the traffic coming from a single IP in 172.16.0.0/12.

The documentation says:

> The apache image will replace the remote addr (IP address visible to Nextcloud) with the IP address from X-Real-IP if the request is coming from a proxy in 10.0.0.0/8, 172.16.0.0/12 or 192.168.0.0/16 by default

So I thought that this is not a problem, as other Docker images can also automagically figure out the real IP address from traffic coming from Cloudflare tunnels.

In the beginning it worked fine, then it was SLOW. Like 2 full minutes to load new feeds on News, waiting ages to complete a sync, and so on. I rebooted the server on those occasions, and then it worked fine *for a day*. So because at the time I was running it on Unraid, I blamed the lag on that OS + my weird array of HDDs with decades of usage on them. Migrated to Debian on an NVMe array and... same lag!

Wasted hours trying to use Caddy + FPM instead of Apache and it's the same: worked fine for a day, then it was slow again.

Then I wondered: what if the program is "smart" and throttles connections by itself, without any warning to the admin, if it thinks that an IP address is sending too many requests?

Modified the docker compose like this:

```
nextcloud:
    image: nextcloud
```

became

```
nextcloud:
    build: .
```

and I created a Dockerfile with

```
FROM nextcloud
RUN apt update -y && apt upgrade -y
RUN apt install -y libbz2-dev
RUN docker-php-ext-install bz2
RUN a2enmod rewrite remoteip
COPY remoteip.conf /etc/apache2/conf-enabled/remoteip.conf
```

with this as the content of remoteip.conf

```
RemoteIPHeader CF-Connecting-IP
RemoteIPTrustedProxy 10.0.0.0/8
RemoteIPTrustedProxy 172.16.0.0/12
RemoteIPTrustedProxy 192.168.0.0/16
RemoteIPTrustedProxy 173.245.48.0/20
RemoteIPTrustedProxy 103.21.244.0/22
RemoteIPTrustedProxy 103.22.200.0/22
RemoteIPTrustedProxy 103.31.4.0/22
RemoteIPTrustedProxy 141.101.64.0/18
RemoteIPTrustedProxy 108.162.192.0/18
RemoteIPTrustedProxy 190.93.240.0/20
RemoteIPTrustedProxy 188.114.96.0/20
RemoteIPTrustedProxy 197.234.240.0/22
RemoteIPTrustedProxy 198.41.128.0/17
RemoteIPTrustedProxy 162.158.0.0/15
RemoteIPTrustedProxy 104.16.0.0/12
RemoteIPTrustedProxy 172.64.0.0/13
RemoteIPTrustedProxy 131.0.72.0/22
RemoteIPTrustedProxy 2400:cb00::/32
RemoteIPTrustedProxy 2606:4700::/32
RemoteIPTrustedProxy 2803:f800::/32
RemoteIPTrustedProxy 2405:b500::/32
RemoteIPTrustedProxy 2405:8100::/32
RemoteIPTrustedProxy 2a06:98c0::/29
RemoteIPTrustedProxy 2c0f:f248::/32
```

and now, because Nextcloud is seeing all the different IP addresses, it doesn't throttle the connections anymore!

For the first one the bandwidth is paid by Microsoft via GitHub; for the second, by linuxserver.io via donations.

That's why I prefer the first one.



Except that a PC from 2008 can still run the latest Windows 10 version without any real issues (an i7 940 from that time is roughly equivalent in performance to a brand-new Celeron G5900), while a Mac from 2008 became e-waste years ago when Apple stopped providing macOS updates for it, and workarounds become harder and harder with each yearly OS update. And too many Mac developers target the latest macOS versions, so if you don't update you can't run the latest apps.

“But you can run Linux on a Mac”: yeah, thanks to the proprietary shit, if I run Debian on my old iMac I get only a black screen (other distros run OK, though not all of them are without issues).


I tried Portainer and it was overkill for my usage: too much overhead and too many features that I don't need.

Right now I'm using Ajenti 2, which shows memory and CPU usage for the Docker containers on the web page.


The problem with “automatically adjust resolution and bitrate” is that it can be done in two ways:

  1. Using a GPU to transcode the 4K video in real time (generally unavailable on a VPS)

  2. Encoding the video in multiple resolutions and bitrates ahead of time, using much more disk space (see the sketch below)

Both solutions are expensive on a VPS.

In this case, when I need to share stuff in 4K 60 fps (basically never) I just host it on YouTube as unlisted and have Google foot the bill. Maybe think of it like this: does the content really deserve to be 4K 60 fps? Home videos that I share with my family are downgraded to 720p since they will watch them horizontally on a vertical screen anyway.
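As a rough idea of what option 2 looks like in practice, here is a minimal ffmpeg sketch (file names and bitrates are placeholders, not a tuned recipe) that pre-encodes one source into two renditions a player could switch between:

```
# pre-encode two fixed renditions of the same source
ffmpeg -i input.mp4 -vf scale=-2:1080 -c:v libx264 -b:v 5M -c:a aac out_1080p.mp4
ffmpeg -i input.mp4 -vf scale=-2:720 -c:v libx264 -b:v 2.5M -c:a aac out_720p.mp4
```

Every rendition you keep multiplies the disk usage, which is exactly why it gets expensive on a VPS.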


150 apps that have been explicitly updated to support a device so expensive that it's guaranteed nobody will actually buy it is a lot. And it's not even on sale yet!

For comparison, look at the Microsoft HoloLens. Similar concept and similar price, announced 8 years ago, and it can only dream of having 150 useful apps. If I go to the HoloLens store page it says “Showing 1 - 90 of 321 items”, and you can see that they are mostly demos or proofs of concept.

Eight years after launch it has just over double the apps of a device that will launch next month.


The ad is really dystopian: the dad is ignoring the kid IRL and playing with memories of that kid.


Accounting software needs to be updated yearly to match the local tax code or it becomes useless.


I left the headline as the original, but I see this as a **massive win** for Apple. The device is ridiculously expensive, isn't even on sale yet and *already* has 150 apps specifically designed for it. If Google did this, it wouldn't get 150 dedicated apps even years after launch (and the guaranteed demise of the product), and even if it were something super cheap like being made of fucking cardboard.

This is something that, as an Android user, I envy a lot about the Apple ecosystem.

Apple: here is a new feature => devs implement it in their apps the very next day, even if it launches officially in 6 months.

Google: here is a new feature => devs ignore it, and apps start to support it after 5-6 Android versions.

Yes, an AI assistant is all it needs. And higher salaries for the C-suite, please. Also it needs to remove features and ignore user requests.


UrBackup. It does automatic backups of both files and whole disk images of all computers in the network, and it's set and forget. Once the server is set up, you just need to install the client on every machine you need to back up; it will pick up all the settings automagically from the network.


I'm using UrBackup server on Linux, but winget has the UrBackup server package, so I guess it also works on Windows.


I don't want to type sudo before every single docker command


I fixed it:

for future reference:


Checked .bash_history; it looks like I installed Docker in the new rootless mode:

wget get.docker.com
ls
mv index.html docker.sh
chmod +x docker.sh
./docker.sh
dockerd-rootless-setuptool.sh install
sudo dockerd-rootless-setuptool.sh install
sudo apt install uidmap
dockerd-rootless-setuptool.sh install

Now I need to see how to restore it to work in the traditional way or I will go crazy with the permissions…
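A rough sketch of the way back to the traditional rootful setup (assuming a standard rootless install; I haven't verified this as an exact recipe):

```
# remove the per-user rootless daemon
dockerd-rootless-setuptool.sh uninstall

# make the CLI talk to the system daemon again
unset DOCKER_HOST
docker context use default

# enable and start the system-wide daemon
sudo systemctl enable --now docker
```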


Uid/gid in docker containers don’t match the uid/gid on the server?
Installed a new Debian server, installed Docker, but now I have a problem with permissions on bind-mounted directories.

On the previous server, the UIDs/GIDs inside the Docker containers matched the UIDs/GIDs on the real server. Root is 0, www-data is 33, and so on.

On this new server, instead, files owned by root (0) in the container are translated to 1000 on the server, www-data (33) becomes 100032, and so on (roughly 1000 prepended to the UID).

Is this normal or did I misconfigure something? On the previous server I was running everything as root (the interactive user was root), and I would like to avoid that.
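My best guess so far: this is rootless Docker / user-namespace remapping at work. Container root gets mapped to my own unprivileged user, and every other UID is offset into the subordinate range from /etc/subuid. A sketch of what those files typically contain (the user name is a placeholder):

```
$ cat /etc/subuid /etc/subgid
youruser:100000:65536
youruser:100000:65536
```

With that mapping, container UID 0 becomes the host user (1000) and container UID 33 (www-data) becomes 100000 + 33 - 1 = 100032, which matches what I'm seeing.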

I think the opposite, actually. The backend is the really interesting part, and the only way we can be sure that “they cannot read the emails” (they arrive in clear text, are saved with reversible encryption and they hold a key for it; if you use their services to commit crimes they will collaborate with law enforcement agencies like everyone else).

IMAP/SMTP could be toggled with a warning, if that's really their concern. As of now I have the feeling it's instead blocked to keep users inside (no IMAP = no easy migration to somewhere else) or to limit usage (no SMTP = no sending mass email).


I started with Mail Basic (10 euro yearly for 10 GB), but then, because I switched from “secondary email that forwards to Gmail” to “primary email that imports from Gmail”, I had to move to the more expensive plan.


For a spammer it literally takes less than ten seconds to clean a list of one million addresses from “plus addresses” and get back the original addresses without the tag that identifies the source. Only amateur spammers use raw lists without any sanitization.
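To give an idea of how trivial it is, something along these lines (a sketch, assuming one address per line in a hypothetical `list.txt`) strips the plus tag from every entry:

```
sed -E 's/\+[^@]+@/@/' list.txt > cleaned.txt
```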


I moved off to Zoho.

Much cheaper than Proton and offers much more.

They're not doing what Proton does, locking away basic stuff like IMAP and SMTP as a way to force you onto the official apps.

I especially love the feature where you can bounce emails based on domains, keywords or TLDs. My spam folder is finally empty. IMHO bouncing spam is much better, as the spammers get a response that the address is invalid and hopefully stop wasting their limited computing resources on that address.

Zoho is not open source, but Proton is a “fake” open source that is mostly used for marketing: they opened only the UI, which talks over a proprietary protocol to a proprietary server, which is useless. They also reject or ignore any pull request on GitHub.


I tried with a 1 GHz Celeron. It was slower than an RPi and it sucked 65 watts at idle 🙈

But at least it can give you some experience; I prefer playing sysadmin with real hardware rather than a VM.


Hi, I'm Italian too. You posted in a community about self-hosting (servers, etc.), where most posts are in English (and you also set the post language to English; Lemmy has a feature that lets you post in different languages and hide the languages you don't speak).

In my opinion it would be better to rewrite your introduction here => https://feddit.it/c/caffeitalia so that more Italians will read it and you'll get more fitting replies 😊


Most “VPN” browser extensions (if not all of them) aren't actually making a VPN connection; they just change the proxy settings in the browser. This is because, as a browser extension, they wouldn't have enough permissions/power to establish a real VPN connection.

So if you want to use a browser extension you have to run a proxy server, or as others said, just use cloudflared, since running an open proxy server attracts bots from all over the world.
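If you do go the proxy route, the laziest sketch I know of is a SOCKS proxy tunnelled over SSH to a server you already control (host name is a placeholder), which at least isn't an open proxy exposed to the whole internet:

```
# local SOCKS5 proxy on port 1080, forwarded through your own server
ssh -N -D 1080 user@your-server.example.com
# then point the browser or extension at socks5://localhost:1080
```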



Kdenlive is absolutely better than many other paid, closed-source video editors in every respect: speed, usability and ease of use.

I got tricked into buying a Pinnacle video editor license (version 24) from Humble Bundle for $1 and I felt scammed. Slow, unresponsive, limited. Kdenlive beats it in every single area. You can clearly see that Pinnacle's video editor is just a cash cow for KKR and they're doing the bare minimum to maximize their revenue.

Last time I tried Shotcut instead, it felt incomplete.


But that's a shitty employer that is just making software for profit. Like Oracle or IBM. They have a niche of masochistic customers who enjoy being nickel-and-dimed at all times.

There are other examples: indie games are closed-source software written with no pressure to do X, Y or Z, driven by passion. Same for other programs; I can see there are many “quality first, money second” examples.

And also in FOSS there is highly opinionated software where the devs completely ignore users, ban them from GitHub when they post issues, or continuously change the APIs without a valid reason, so your plugins need constant rework and it's a mess to keep up.


My Lemmy instance is deindexed on Bing/DDG, probably due to a false claim by a copyright troll


They should change the DMCA so that if a copyright troll purposely sends false/automated requests, they forfeit their rights and can't claim anything anymore for the next century. And Google should automatically reject any request that includes a reputable website like the BBC, Microsoft or NASA, and hopefully flag the copyright troll so any future request gets lower priority or is automatically rejected.


No, PeerTube is a YouTube alternative. Videos must be manually uploaded to a server, which then federates them to other servers. Like Reddit vs Lemmy or Xitter vs Mastodon.


Is there a way to know how much I am spending/going to spend with Amazon S3?
I have several TB of borg backups. I uploaded them to Backblaze B2, where I could immediately see how many resources I was using, how many API calls, and so on. Very easy to see and predict the next bill. I can see exactly which bucket uses more resources and which is growing over time.

Because I'm cheap, I want to upload those files to AWS Glacier, which theoretically costs a quarter of B2 for storage, but where API calls are extremely expensive. So I want to know the details. I wouldn't like to get a bill with $5 in storage and $500 in API calls.

I uploaded a backup, but nowhere in AWS can I see how many resources I am using, how much I'm going to pay, how many API calls were made, how much user XYZ spent, and so on. It looks like it's **designed** for an approach like “just use our product freely, don't worry about pricing, it's a problem for the financial department of your company”.

In the AWS console I found “S3 Storage Lens”, but it says I need to delegate the access to someone else for some reason. I tried to create another user in my 1-user org, but after wasting 2 hours I wasn't able to find a way to add those permissions. I tried to create a dashboard in “AWS Cost Explorer” but all the indicators are null or zero.

So, how can I see how many API calls and how much storage is used, to predict the final bill? Or is the only way to pray, wait for the end of the month, and hope everything is itemized in detail there?
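For reference, a sketch of what pulling S3 costs per usage type out of Cost Explorer with the AWS CLI might look like (dates are placeholders, the data lags by roughly a day, and as far as I know each Cost Explorer API request is itself billed at $0.01):

```
aws ce get-cost-and-usage \
  --time-period Start=2024-01-01,End=2024-02-01 \
  --granularity DAILY \
  --metrics "UnblendedCost" "UsageQuantity" \
  --group-by Type=DIMENSION,Key=USAGE_TYPE \
  --filter '{"Dimensions":{"Key":"SERVICE","Values":["Amazon Simple Storage Service"]}}'
```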

I guess that means it's dead, as there's no way a corporation would pay millions to acquire a competitor just to continue developing a free alternative to their own product

How I accidentally wiped my server by having a typo in my Nextcloud docker config
So, I moved my Nextcloud directory from a local SATA drive to an NFS mount from an NVMe array on a 10G network.

“I just need to change `/docker/nextcloud` to `/mnt/nfs/nextcloud` in the `docker-compose.yml`, what's the issue, I'll do it live” - I tell myself.

So I stop the container, copy `/docker/nextcloud` to `/mnt/nfs/nextcloud`, then edit the `docker-compose.yml`... and... because I'm doing it during a phone call without paying too much attention, I change the main directory to `/docker`.

I rebuild the container and I immediately hear a flood of Telegram notifications from my uptime-kuma bot... oh oh...

It looks like the Nextcloud Docker image has an initialization script that, if it doesn't find the files in the directory, deletes everything and installs a fresh copy of Nextcloud... so it deleted **everything** on my server.

Luckily I had a very recent full borg backup and I'm restoring it (I kinda love-hate borg: I always forget the restore commands when in panic and the docs are a bit cryptic for me).

Lessons learned:

1. always double check everything
2. offsite backups are a must (if I had accidentally written `/` as the path, I would have lost the borg backups too!)
3. offsite backups should not be permanently mounted, otherwise they would have been wiped as well
4. learn how to use and schedule filesystem snapshots, so the recovery wouldn't take ages like it's taking right now (2+ hours and I'm not even halfway...)
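Since I always forget them in a panic, this is roughly the shape of a borg restore (a sketch with placeholder repository and archive names, not the exact commands I ran):

```
# list the archives in the repository
borg list /mnt/offsite/borg-repo

# restore one archive into the current directory
cd /restore-target
borg extract /mnt/offsite/borg-repo::nextcloud-2024-01-01

# or browse it read-only first via FUSE
borg mount /mnt/offsite/borg-repo::nextcloud-2024-01-01 /mnt/borg-view
```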

NFS or iSCSI?
So, I got persuaded to switch from a “server that is going to do everything” to “compute server + storage server”.

The two are connected via a DAC on an Intel X520 network card. Compute is 10.0.0.1, storage is 10.255.255.254 and I left the usable hosts in the middle for future expansion.

Before I start to use it, I'm wondering if I chose the right protocols to share data between them. I set up NFS and iSCSI.

With iSCSI I create an image, share that image with the compute server, format it as btrfs and use it as a native drive. Files are not accessible anywhere else.

With NFS I just mount the share and files can be accessed from another computer.

Speed: I tried to time how long it takes to fill a dummy file with zeroes.

```
/iscsi# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"
250000+0 records in
250000+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 0.88393 s, 2.3 GB/s

real    0m2.796s
user    0m0.051s
sys     0m0.915s
```

```
/nfs# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"
250000+0 records in
250000+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 2.41414 s, 848 MB/s

real    0m3.539s
user    0m0.038s
sys     0m1.453s
```

```
/sata-smr-wd-green-drive-for-fun# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"
250000+0 records in
250000+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 10.1339 s, 202 MB/s

real    0m46.885s
user    0m0.132s
sys     0m2.423s
```

What I see from these results:

* the slow SATA drive writes at 1.6 gigabit/s, but then for some reason the computer needs a lot of time to acknowledge the operation.
* NFS transferred at 6.8 gigabit/s, which is what I expected from an NVMe array. The same command on the storage server gives similar speed.
* iSCSI transfers at 18.4 gigabit/s, which is not possible with my drives and the fiber connection. Probably it is using some native filesystem trickery to detect “it's just a file full of zeroes, just tell the user it's done”.

The biggest advantage of NFS is that I can share a whole directory and get direct access. Also, sharing another disk image via iSCSI requires a service restart, which means I have to take down the compute server. But with iSCSI I am the owner of the disk, so I can do whatever I want: I don't need to worry about permissions, I am root, I can chown all the stuff.

So... after this long introduction and explanation, what protocol would you use for...:

* /var/lib/mysql - a database. Inside a disk image shared via iSCSI or via NFS?
* virtual machine images. Copy them inside another image that's then shared via iSCSI? Maybe NFS is much better for this case. Otherwise with iSCSI I would have a single giant disk image that contains other disk images...
* lots of small files like WordPress. Maybe NFS would add too much overhead? But it would be much easier to back up if it were an NFS share instead of a disk image.
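One caveat about my own benchmark: writing zeroes through the page cache is easy to game, which is probably where the impossible 18.4 gigabit/s comes from. A variant that bypasses the cache and flushes before reporting would look like this (a sketch; fio with random data would be even more realistic):

```
# bypass the page cache and flush before reporting, so the number reflects the storage path
dd if=/dev/zero of=ddfile bs=1M count=2000 oflag=direct conv=fdatasync
```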

A new California law limits cash-to-crypto conversions at ATM machines to $1,000 per day per person and also caps the fees the machines can charge. The industry says this will hurt the business, hinting that they're profiting from the lack of KYC policies.

I don't see any legitimate use for those machines. Who would have a legit need to exchange $15k from cash to crypto at 33% fees????

Sorry, more news about this asshole, but this is too much assholery not to share.

Despite being a shitty boss who fired employees that criticized him on Twitter, he promised an “unlimited” legal defense fund to fight against employers who fired employees because of something they wrote on Twitter. Under his tweet a lot of “verified” (= right-wing) accounts applauded this and asked him to go after employers who fired employees for having written something homophobic.

I tried to delete and recreate the container, but there's still this insane power consumption. For comparison, cloudflared running a tunnel for hundreds of users takes 0.06% of CPU time on the same server.


Anyone using “docker run” instead of “docker compose”?
For the vast majority of Docker images, the documentation only mentions a super long and hard-to-understand "docker run" one-liner.

Why is nobody putting an example docker-compose.yml in their documentation? It's so tidy and easy to understand, and also much easier to run in the future: just set and forget. If every image came with a yml to copy, I could get it running in a few seconds; instead I have to decode the one-liner and turn it into a yml myself.

I want to know if it's just me being out of touch and I should use "docker run", or if it's just that a one-liner looks tidier in the docs. Like saying "hey, just copy and paste this line to run the container. You don't understand what it does? Who cares."

The worst are the ones that pipe directly from curl into "sudo bash"...
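To show what I mean, here is a hypothetical one-liner (image name, port and paths are made up for the example) and the compose file I'd rather see in its place:

```
docker run -d --name someapp -p 8080:80 -v ./data:/data -e TZ=Europe/Rome --restart unless-stopped example/someapp:latest
```

becomes

```
services:
  someapp:
    image: example/someapp:latest
    container_name: someapp
    ports:
      - "8080:80"
    volumes:
      - ./data:/data
    environment:
      - TZ=Europe/Rome
    restart: unless-stopped
```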

Is outline VPN still a good option to host?
In a few months I need to go to an authoritarian country for work and I need a VPN. Last time (2018) the only one that worked reliably was Outline, based on a self-hosted VM. But I went back to look and it seems abandoned: link rot on the home page if I select my language, no Docker install in 2023, and the fact that it was developed by Google is a big negative sign (not because of privacy issues, but because they get bored with their projects, abandon them and start an identical but incompatible one).

Are there better alternatives in 2023? I need the best possible chance of connecting to the uncensored internet, as otherwise I can't work and my 2-week work trip becomes a 2-week vacation where I can't even check my email or work chat.

Could my simple python program softbrick an IMAP server?
I bought a domain on OVH and since I expected little use of email on that domain, I made do with the "included" email. The one that uses Roundcube 1.4 (from the last century) as webmail.

One day I get an email:

> Votre boite mail a atteint 90% de l'espace maximum autorisé sur le serveur. (your mailbox is full)

"It must be a bug, I have 5 GB of space and only a dozen mails in the box" - I say to myself.

But instead all the mails sent to this mailbox bounce with the error "mailbox full". Since the mails are of little importance, I delete everything and empty the trash. Now it is empty, but incoming mails are still rejected with "mailbox full".

I open a ticket. They don't believe it at first, but after I send screenshots of both the Roundcube quota and their panel showing that the usage is zero, they restore my access. After a couple of days I again get the usual email in French, "your box is full". I reopen the ticket and they tell me that everything is normal and that it is the fault of my email client.

Now that I think about it, I did some experiments with Python to see if I could read the mails on the server, but I used read-only access, so I don't think it's my fault... Here is the code:

```
import imaplib
import email
from datetime import datetime, timedelta
from sendmail import SendMail
from connection import Connection

class GetMail:
    def __init__(self, connection):
        self.connection = connection

    def fetch_emails(self):
        # connect to the IMAP account
        imap = imaplib.IMAP4_SSL(self.connection.imap_server)
        imap.login(self.connection.username, self.connection.password)
        imap.select('INBOX', readonly=True)
        status, data = imap.search(None, 'ALL')
        email_ids = data[0].split()
        email_ids = email_ids[:quantemail]  # NOTE: 'quantemail' (how many emails to list) is not defined in this snippet
        for email_id in email_ids:
            status, data = imap.fetch(email_id, '(BODY[HEADER.FIELDS (FROM SUBJECT)])')
            raw_email = data[0][1]
            print('Raw Email:\n', raw_email.decode('utf-8'), '\n')
        if email_ids:
            try:
                email_index = int(input("Inserisci l'indice dell'email da scaricare: "))
            except:
                email_index = -1
            if 1 <= email_index <= len(email_ids):
                email_id = email_ids[email_index - 1]
                status, data = imap.fetch(email_id, '(RFC822)')
                raw_email = data[0][1]
                email_message = email.message_from_bytes(raw_email)
                eml_filename = f"email_temp.eml"
                with open(eml_filename, 'wb') as eml_file:
                    eml_file.write(raw_email)
                print("Email scaricata:", eml_filename)
            else:
                print("Indice email non valido.")
        else:
            print("Nessuna email trovata.")
        imap.logout()
```

It seems strange to me that if I log in one day with this program, then a few days later the box shows as full even though it is not, and deleting messages does not restore access. Maybe if you log in with "readonly=True", the server has a bug where it puts the whole box read-only and can no longer receive mail?

Something to search files in my LAN?
At the moment I use a super hacky and unstable setup where multiple instances of [Everything](https://www.voidtools.com/) in ETP server mode run under a Windows Core VM, and *when it works*, it works fine. But sometimes it stops updating the index, sometimes it just crashes, sometimes it serves the wrong DB, and so on.

So: is there a better way? Maybe a web app? Possibly with multiple users; some directories are only for me and not for the rest of the family.

The solution can also be paid (but with a reasonable price, as this is for personal use and I can't justify the $$$ for an enterprise solution).

Switching from Apache to Caddy: how to use php-fpm?
Hi, some bot for some reason really needs to scrape my WordPress blog over and over again, overheating my poor Celeron. Out of laziness I am just using the default WordPress Docker image, which is PHP + Apache. I did some experiments, and with php-fpm + Caddy it's much faster.

Now I want to migrate all my WordPress blogs, five in total, and I want to manage them from a single Caddy instance. Does php-fpm need the files mounted at the same path, or can it be different? For example, the WordPress install places the files in /var/www/html, so for Caddy I mount the same path and in the Caddyfile I have:

```
test.example.com {
	root * /var/www/html
	php_fastcgi wordpress:9000
	file_server
}
```

If I have multiple installs, can I mount different paths like /var/www/html2 and so on (but only on Caddy), or must the paths match in both containers?
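For context, this is the kind of layout I'm imagining for the five blogs (a sketch with made-up hostnames, assuming each WordPress container's files are mounted at the same path inside both that container and the Caddy container):

```
blog1.example.com {
	root * /var/www/blog1
	php_fastcgi wordpress1:9000
	file_server
}

blog2.example.com {
	root * /var/www/blog2
	php_fastcgi wordpress2:9000
	file_server
}
```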

IMHO a decision that doesn't make sense: Google has already paid for all the infrastructure, so running the service is pure profit for them, not to mention the lost opportunity to upsell Google Workspace (paid Gmail with a custom domain) or Google Cloud services to all those customers. Although I would never subscribe to something from Google for my business, because when they eventually change their mind and kill the product, I have to migrate in a hurry. I've been hurt too many times.

Finally, after so many years, the "never combine labels" setting is back and I can uninstall ExplorerPatcher.