• 20 Posts
  • 320 Comments
Joined 1Y ago
Cake day: Jun 02, 2023


If you make backups with a tool like Borg that creates encrypted archives, then AWS S3 Glacier is the cheapest option.

What’s bad about it: if you ever need those files again, downloading them is going to be VERY expensive, so it has to be treated as the “what if a nuke hits my city and all the local and off-site backups are vaporized” solution

Also: it’s not recommended to store plain files directly; they need to be in an archive format with big chunks, because the API calls used to list them during a sync are billed at a very expensive rate (see the sketch below)
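For example, a hedged sketch of the upload step with the AWS CLI, assuming a bucket you own and the Deep Archive tier (bucket name and paths are placeholders); a Borg repo is already stored as large chunk files, so syncing the repo directory keeps the object count low:

```
# Sync the local Borg repository to S3 in the cheapest storage class;
# retrievals from Deep Archive are slow and expensive, uploads are cheap
aws s3 sync /backups/borg-repo s3://my-backup-bucket/borg-repo \
    --storage-class DEEP_ARCHIVE
```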


Ah, maybe I was missing the ./ ; it said garage was not found in the PATH (I’m on mobile, can’t try now)


I set up Garage via Docker and it was not impossibly hard.

The main problem is that there isn’t an admin panel and you can’t log in to the docker container via docker exec, so you have to write some Python (or another language of your choice) to send requests to the admin API port to (see the sketch after this list):

  1. Set the layout of your server
  2. Create a user
  3. Create a bucket
  4. Assign that bucket to your user
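For reference, a minimal Python sketch of those four calls, assuming the admin API listens on port 3903 and that the v1 endpoint names and JSON fields match the Garage admin API docs; treat every endpoint, field, and the token as something to verify against your version:

```
import requests

# Assumptions: admin API on port 3903, v1 endpoints; verify against your release
ADMIN = "http://localhost:3903"
HEADERS = {"Authorization": "Bearer YOUR_ADMIN_TOKEN"}  # placeholder token

# 1. Set the layout: assign a role to your node, then apply the new version
requests.post(f"{ADMIN}/v1/layout", headers=HEADERS, json=[
    {"id": "YOUR_NODE_ID", "zone": "dc1", "capacity": 100_000_000_000, "tags": []},
])
requests.post(f"{ADMIN}/v1/layout/apply", headers=HEADERS, json={"version": 1})

# 2. Create a user (an API key pair, in Garage terms)
key = requests.post(f"{ADMIN}/v1/key", headers=HEADERS,
                    json={"name": "my-user"}).json()

# 3. Create a bucket
bucket = requests.post(f"{ADMIN}/v1/bucket", headers=HEADERS,
                       json={"globalAlias": "my-bucket"}).json()

# 4. Assign the bucket to the user
requests.post(f"{ADMIN}/v1/bucket/allow", headers=HEADERS, json={
    "bucketId": bucket["id"],
    "accessKeyId": key["accessKeyId"],
    "permissions": {"read": True, "write": True, "owner": True},
})
```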

No matter which country the IoT device is made in, giving internet access to these things on military bases is madness. IoT devices must be on a separate VLAN without any internet access. No exceptions: they’re usually running buggy firmware based on ancient Linux versions, and no updates are ever released or installed. They’re exploitable time bombs


With paid certificates you can target ancient and unsupported operating systems like Windows XP and Android 2; Let’s Encrypt is relatively recent and it’s not present in the root certificate stores of those systems



Technically, if it wasn’t for the unofficial server component, you would have to pay for a subscription even if you self-host


I made something crude with Python and Flask, but it’s only for printing address labels, always with the same settings (paper size and so on)

So I just have a text box: type, press the button, and it prints.

When printing generic stuff, you would need to set the paper type, paper size, color or B/W, whether to print on both sides, whether to print from a specific tray, and then some kind of user authentication (I am lazy and didn’t care about privacy, so I used Cloudflare Access), so the complexity gets much bigger.

Before making my crude script I searched a long time for a free or cheap solution, but I didn’t find one. If you find one, let me know


It didn’t appear for me until I reinstalled from the Play Store

Didn’t investigate further


big disadvantage: if installed via F-Droid, it won’t work with Android Auto


I’m really curious what kind of BS. Like, you could search for a sex shop and get directions to go in person?



On my m920q I used a random RAM stick and it works fine. I would have returned it if it had only worked with specific RAM sticks; even Apple didn’t do that



Makes sense, the site is dying and the Mac app was dead even before Musk.

Maintaining the iPad app on macOS would have required hiring a part-time intern, and there’s no money for that


Oops. As a non-native English speaker I misunderstood what he meant. I wrongly understood that he had set the server to ban everything that asked for robots.txt


only if you don’t want any visits except your own, because this removes your site from every search engine

you should instead write a “Disallow: /juicy-content” and then ban anything that tries to access that path (only bad bots would follow it); see the sketch below
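A minimal Flask sketch of that trap (the path name and the in-memory ban list are made up for illustration; in practice you’d feed the IPs to your firewall or fail2ban):

```
from flask import Flask, request, abort

app = Flask(__name__)
banned = set()  # collected bot IPs; a real setup would persist these

@app.route("/robots.txt")
def robots():
    # Advertise a path that no legitimate crawler should ever visit
    body = "User-agent: *\nDisallow: /juicy-content\n"
    return body, 200, {"Content-Type": "text/plain"}

@app.route("/juicy-content")
def honeypot():
    # Only a bot that ignores robots.txt ends up here: remember its IP
    banned.add(request.remote_addr)
    abort(403)

@app.before_request
def block_banned():
    if request.remote_addr in banned:
        abort(403)

if __name__ == "__main__":
    app.run()
```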


for common people they respect it, and they even warn webmasters who submit a sitemap containing paths that are disallowed in robots.txt


A search engine shouldn’t have to pay a website for the honor of bringing it visits and ad views.

Fuck reddit, get delisted, no problem.

Weird that Google is ignoring their robots.txt, though.

Even if they pay them to be able to say that glue is perfect on pizza, having

```
User-agent: *
Disallow: /
```

should block Googlebot too. That means Google programmed an exception into Googlebot to ignore robots.txt on that domain, and that shouldn’t be done. What’s the purpose of that file then?

Because robots.txt is completely honor-based (there’s no need to pretend to be another bot, they could just ignore it), it should be

```
User-agent: Googlebot
Disallow:
User-agent: *
Disallow: /
```

The Cloudflare tunnel is effectively a local reverse proxy

Create a Docker network, place everything on that same network, and then the tunnel can reach stuff at http://[container-name]

So you point the tunnel at http://nextcloud or http://jellyfin:8096 and so on (see the compose sketch below)

You’d think: “but without a local proxy that does SSL encryption, Cloudflare could read my communication”. No: if they really wanted to, they could read it anyway, as they decrypt and re-encrypt all the traffic
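A minimal compose sketch of that layout, assuming a token-based tunnel (the token and service names are placeholders):

```
services:
  cloudflared:
    image: cloudflare/cloudflared
    command: tunnel run --token YOUR_TUNNEL_TOKEN  # placeholder token
    networks: [tunnel]
  nextcloud:
    image: nextcloud
    networks: [tunnel]  # reachable from the tunnel as http://nextcloud

networks:
  tunnel: {}
```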


I like Munin: it’s very limited, a bit hard to configure, and doesn’t have many features, but it uses almost no resources


The next TrueNAS version replaces Kubernetes with Docker Compose; you could try a nightly to see if that works for you


I would have assumed that maps are disabled by default and all requests are proxied by the server to some Mapbox API set up by the admin


It doesn’t have an option to split it?

When I did my Google Takeout to delete all my pics from Google Photos, there was an option to split it, like “one zip every 2 GB”


Minio is definitely not designed to be self-hosted on a small server by normal people; it’s more for enterprise use, where you have multiple servers and you’re paying hundreds of thousands of dollars for support



when I had only the file server, I turned it on via WOL each time I actually needed it, and a script shut it down if there was no activity after 11pm (see the sketch below)
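The magic-packet part is tiny; a minimal Python sketch, with a placeholder MAC address:

```
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: 6x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

wake("aa:bb:cc:dd:ee:ff")  # placeholder MAC of the file server
```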

now I host so much stuff and I’m so dependent on it that it requires redundant power and failover WAN via 5G…


I had one of those NAS boxes (NSA320). Even when they were new and supported, they were running some ancient custom version of Linux with ancient packages. It would be insane to expose them to the internet.




If you want to use it exclusively as a NAS, then why not TrueNAS?

I have an unRAID server, but the NAS part is nowhere near as good as TrueNAS (slower, worse AD integration)

The main issue with virtualization is the bootable USB stick whose serial number is used as DRM


0.69? Here they charge 0.89, which is a nice 1000% markup over the electricity price. And the shame is that everyone copied the insane pricing, still claiming “sorry, the war in Ukraine forces us to double the prices”, even though prices are now back to where they were at the beginning of 2021


Looks like the team knew what was going on: three weeks ago the main European competitor hired this Tesla Supercharger manager as CEO https://newmobility.news/2024/04/18/ionity-snaps-up-tesla-supercharger-europe-boss-as-new-ceo/


With this move Tesla effectively gave up on the $7.5 billion package from the US government to build new infrastructure. Since the network was paid for with private money, it doesn’t have to be nationalized. It’s also accessible to anyone, with prices that are reasonable

Also, I don’t know how efficient the government is in the USA, but in my country the chargers built by the semi-nationalized electric company are almost always broken, because they don’t really care about profits: they have an unlimited government budget, so what’s the problem if a charger breaks and gets fixed after 2 years, making zero revenue during that time?


Any H1B visa hostage can pick up someone else’s work on Xitter for slave hours, but for cars you need experience, talent, and know-how. If you fire a whole team to save $x, you’ll have to pay $5x to rebuild that team when you eventually need it. (Tesla probably needs a team to develop new models, eventually. Same for the policy team, which will be useful to launch the autonomous taxi when it exits beta in 2045.)


not only will the board not fire him, they’ll probably even vote to approve that bonus he’s craving



Intel GPU = any integrated graphics from any Intel CPU made in the last 8 years. This includes those crusty $10 Celerons; you don’t need a dedicated Intel Arc GPU (unless you’re streaming to dozens of users at the same time)

details of the supported formats: https://en.wikipedia.org/wiki/Intel_Quick_Sync_Video
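A quick hedged check that Quick Sync actually works, assuming your ffmpeg build has QSV support (the filename is a placeholder); it hardware-decodes and hardware-encodes without writing an output file:

```
# If this runs without errors, the iGPU transcoding path works
ffmpeg -hwaccel qsv -i input.mkv -c:v h264_qsv -f null -
```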


NVMe for videos seems expensive for nothing, unless you are serving 4K videos over a 10 Gbit connection to multiple users


ownCloud Infinite Scale is a rewrite of ownCloud (= Nextcloud) in Go; it supports local, NFS, and S3 mounts. Change the SMB share to NFS and it might fit your needs

Disadvantages are:

  1. All the plugins need to be rewritten, so if you need some extra feature, it’s going to be missing
  2. They got acquired by a company that sells an expensive alternative for corporations (RIP? Who is paying millions to maintain a free alternative/competitor?)
  3. Documentation is inferior, community is much smaller

I decided that I will update the Nextcloud (Windows) desktop client once or twice a decade
I've had enough. Last year the automatic updater was rebooting Windows without any warning after the UAC prompt. The problem continued for **months** before being fixed. This year I got an update a week: very annoying to get the same "why u no reboot? I need updates" question every single time I turn on my PC. Today's update killed explorer.exe without any confirmation and didn't bring it back to life.

I don't think their paid enterprise customers get used as ~~beta~~ alpha testers like this. Is it really necessary to push nightlies to end users? Can't it be tested casually for a couple of days and then pushed?

I disabled the update check and will update the Nextcloud desktop client manually every 5 years, if I can remember. Added an exception to Winget so it doesn't update it. I lost my patience.
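For anyone wanting to do the same, a sketch of the Winget exception, assuming the package ID is Nextcloud.NextcloudDesktop (check with `winget search nextcloud` first):

```
# Pin the package so "winget upgrade --all" skips it
winget pin add --id Nextcloud.NextcloudDesktop
```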

How I accidentally slowed down my Nextcloud instance for months
I am running this docker image: https://github.com/nextcloud/docker with a cloudflare tunnel, meaning the webserver would see all the traffic coming from a single IP in 172.16.0.0/12.

The documentation says:

> The apache image will replace the remote addr (IP address visible to Nextcloud) with the IP address from X-Real-IP if the request is coming from a proxy in 10.0.0.0/8, 172.16.0.0/12 or 192.168.0.0/16 by default

So I thought this was not a problem, as other docker images can also automagically figure out the real IP address from traffic coming from cloudflare tunnels.

In the beginning it worked fine, then it was SLOW. Like 2 full minutes to load new feeds on News, waiting ages to complete a sync, and so on. I rebooted the server on those occasions, and then it worked fine *for a day*.

Because at the time I was running it on Unraid, I blamed the lag on that OS + my weird array of HDDs with decades of usage on them. Migrated to Debian on an NVMe array and... same lag! Wasted hours trying caddy+fpm instead of apache and it was the same: worked fine for a day, then it was slow again.

Then I wondered: what if the program is "smart" and throttles connections by itself, without any warning to the admin, when it thinks an IP address is sending too many requests?

Modified the docker compose like this:

```
nextcloud:
  image: nextcloud
```

became

```
nextcloud:
  build: .
```

and I created a Dockerfile with

```
FROM nextcloud
RUN apt update -y && apt upgrade -y
RUN apt install -y libbz2-dev
RUN docker-php-ext-install bz2
RUN a2enmod rewrite remoteip
COPY remoteip.conf /etc/apache2/conf-enabled/remoteip.conf
```

with this as the content of remoteip.conf

```
RemoteIPHeader CF-Connecting-IP
RemoteIPTrustedProxy 10.0.0.0/8
RemoteIPTrustedProxy 172.16.0.0/12
RemoteIPTrustedProxy 192.168.0.0/16
RemoteIPTrustedProxy 173.245.48.0/20
RemoteIPTrustedProxy 103.21.244.0/22
RemoteIPTrustedProxy 103.22.200.0/22
RemoteIPTrustedProxy 103.31.4.0/22
RemoteIPTrustedProxy 141.101.64.0/18
RemoteIPTrustedProxy 108.162.192.0/18
RemoteIPTrustedProxy 190.93.240.0/20
RemoteIPTrustedProxy 188.114.96.0/20
RemoteIPTrustedProxy 197.234.240.0/22
RemoteIPTrustedProxy 198.41.128.0/17
RemoteIPTrustedProxy 162.158.0.0/15
RemoteIPTrustedProxy 104.16.0.0/12
RemoteIPTrustedProxy 172.64.0.0/13
RemoteIPTrustedProxy 131.0.72.0/22
RemoteIPTrustedProxy 2400:cb00::/32
RemoteIPTrustedProxy 2606:4700::/32
RemoteIPTrustedProxy 2803:f800::/32
RemoteIPTrustedProxy 2405:b500::/32
RemoteIPTrustedProxy 2405:8100::/32
RemoteIPTrustedProxy 2a06:98c0::/29
RemoteIPTrustedProxy 2c0f:f248::/32
```

and now, because Nextcloud sees all the different IP addresses, it doesn't throttle the connections anymore!

I left the headline like the original, but I see this as a **massive win** for Apple. The device is ridiculously expensive, isn't even on sale yet, and *already* has 150 apps specifically designed for it.

If Google did this, it wouldn't get 150 dedicated apps even years after launch (and the guaranteed demise of the product), not even if it were something super cheap, like being made of fucking cardboard.

This is something that, as an Android user, I envy a lot about the Apple ecosystem. Apple: "this is a new feature" => devs implement it in their apps the very next day, even if it launches officially in 6 months. Google: "this is a new feature" => devs ignore it, apps start to support it after 5-6 Android versions

Uid/gid in docker containers don’t match the uid/gid on the server?
Installed a new Debian server, installed docker, but now I have a problem with permissions on passed directories.

On the previous server, the uid/gids inside the docker containers matched the uid/gids on the real server. Root was 0, www-data was 33, and so on.

On this new server, instead, files owned by root (0) in the container are translated to 1000 on the server, www-data (33) becomes 100032, and so on (it looks like an offset is added to the uid).

Is this normal, or did I misconfigure something? On the previous server I was running everything as root (the interactive user was root), and I would like to avoid that
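That offset pattern looks like user namespace remapping (or a rootless Docker install); a minimal sketch of what to check, assuming the standard file locations:

```
# If userns remapping is enabled, the daemon config contains a line like
#   "userns-remap": "default"
cat /etc/docker/daemon.json

# These files define the uid/gid offsets applied inside the namespace
cat /etc/subuid /etc/subgid

# A rootless install instead shows up as a per-user service
systemctl --user status docker
```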

Is there a way to know how much I am spending/going to spend with Amazon S3?
I have several TB of borg backups. I uploaded them to Backblaze B2, where I could immediately see how many resources I was using, how many API calls, and so on. Very easy to see and predict the next bill. I can see exactly which bucket uses more resources and which is growing over time.

Because I'm cheap, I want to upload those files to AWS Glacier, which theoretically costs a quarter of B2 for storage, but whose API calls are extremely expensive. So I want to know the details. I wouldn't like to get a bill with $5 in storage and $500 in API calls.

I uploaded a backup, but nowhere in AWS can I see how many resources I am using, how much I'm going to pay, how many API calls were made, how much user XYZ spent, and so on. It looks like it's **designed** for an approach like "just use our product freely, don't worry about pricing, it's a problem for the financial department of your company".

In the AWS console I found "S3 Storage Lens", but it says I need to delegate access to someone else, because reasons. I tried to create another user in my 1-user org, but after wasting 2 hours I wasn't able to find a way to add those permissions. I tried to create a dashboard in "AWS Cost Explorer", but all the indicators are null or zero.

So, how can I see how many API calls and how much storage are being used, to predict the final bill? Or is the only way to pray, wait for the end of the month, and hope everything is itemized in detail?
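A hedged sketch of one way to get that breakdown from the CLI, assuming Cost Explorer is enabled on the account (it lags up to a day, and the dates are placeholders); grouping by usage type is what separates storage cost from API request cost:

```
# Monthly S3 cost, split by usage type (storage vs. requests)
aws ce get-cost-and-usage \
    --time-period Start=2024-05-01,End=2024-06-01 \
    --granularity MONTHLY \
    --metrics UnblendedCost \
    --filter '{"Dimensions":{"Key":"SERVICE","Values":["Amazon Simple Storage Service"]}}' \
    --group-by Type=DIMENSION,Key=USAGE_TYPE
```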

I guess that means it's dead, as there's no way a corporation would pay millions to acquire a competitor just to continue developing a free alternative to their own product

How I accidentally wiped my server by having a typo in my Nextcloud docker config
So, I moved my nextcloud directory from a local SATA drive to an NFS mount from an NVMe array on a 10G network.

"I just need to change `/docker/nextcloud` to `/mnt/nfs/nextcloud` in the `docker-compose.yml`, what's the issue, I'll do it live" - I tell myself.

So I stop the container, copy `/docker/nextcloud` to `/mnt/nfs/nextcloud`, then edit the `docker-compose.yml`... and... because I'm doing it during a phone call without paying too much attention, I change the main directory to `/docker`.

I rebuild the container and I immediately hear a flood of Telegram notifications from my uptime-kuma bot... oh oh...

Looks like the nextcloud docker image has an initialization script that, if it doesn't find the files in the directory, deletes everything and installs a fresh copy of nextcloud... so it deleted **everything** on my server.

Luckily I had a very recent full borg backup and I'm restoring it (I kinda love-hate borg: I always forget the restore commands when in panic, and the docs are a bit cryptic for me).

Lessons learned:

1. always double check everything
2. offsite backups are a must (if I had accidentally written `/` as the path, I would have lost the borg backups too!)
3. offsite backups should not be permanently mounted, otherwise they would have been wiped as well
4. learn how to use and schedule filesystem snapshots, so the recovery wouldn't take ages like it's taking right now (2+ hours and I'm not even half way...)

NFS or iSCSI?
So, I got persuaded to switch from a "server that is going to do everything" to "compute server + storage server".

The two are connected via a DAC on an Intel X520 network card. Compute is 10.0.0.1, storage is 10.255.255.254, and I left the usable hosts in the middle for future expansion.

Before I start to use it, I'm wondering if I chose the right protocols to share data between them. I set up NFS and iSCSI.

With iSCSI I create an image, share that image with the compute server, format it as btrfs, and use it as a native drive. Files are not accessible anywhere else.

With NFS I just mount the share, and files can be accessed from other computers too.

Speed: I tried to time how long it takes to fill a dummy file with zeroes.

```
/iscsi# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"
250000+0 records in
250000+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 0.88393 s, 2.3 GB/s

real	0m2.796s
user	0m0.051s
sys	0m0.915s
```

```
/nfs# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"
250000+0 records in
250000+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 2.41414 s, 848 MB/s

real	0m3.539s
user	0m0.038s
sys	0m1.453s
```

```
/sata-smr-wd-green-drive-for-fun# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"
250000+0 records in
250000+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 10.1339 s, 202 MB/s

real	0m46.885s
user	0m0.132s
sys	0m2.423s
```

What I see from these results:

* the slow SATA drive writes at 1.6 gigabit/s, but then for some reason the computer needs a lot of time to acknowledge the operation
* NFS transferred at 6.8 gigabit/s, which is what I expected from an NVMe array; the same command on the storage server gives similar speeds
* iSCSI transferred at 18.4 gigabit/s, which is not possible with my drives and the fiber connection. Probably it's using some native file system trickery to detect "it's just a file full of zeroes, just tell the user it's done"

The biggest advantage of NFS is that I can share a whole directory and get direct access. Also, sharing another disk image via iSCSI requires a service restart, which means I have to take down the compute server. But with iSCSI I am the owner of the disk, so I can do whatever I want without worrying about permissions: I am root, chown all the stuff.

So... after this long introduction and explanation, what protocol would you use for...:

* /var/lib/mysql - a database. Inside a disk image shared via iSCSI, or via NFS?
* virtual machine images. Copy them inside another image that's then shared via iSCSI? Maybe NFS is much better for this case; otherwise with iSCSI I would have a single giant disk image that contains other disk images...
* lots of small files like WordPress. Maybe NFS would add too much overhead? But it would be much easier to back up than a disk image
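One note on the impossible 18.4 gigabit/s: dd without special flags writes into the page cache, so the number mostly measures RAM, not the iSCSI path. A variant that bypasses the cache gives a more honest figure (same dummy-file idea, bigger blocks because direct I/O hates tiny ones):

```
# oflag=direct bypasses the page cache; alternatively conv=fdatasync
# includes the final flush in the timing
dd if=/dev/zero of=ddfile bs=1M count=2000 oflag=direct
```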

A new California law limits cash-to-crypto exchanges at ATMs to $1,000 per day per person, and also caps the fees the machines can charge. The industry says this will hurt business, hinting that they're profiting from the lack of KYC policies.

I don't see any legitimate use for those machines. Who would have a legit need to exchange $15k from cash to crypto at 33% fees????

Sorry, more news about this asshole, but this is too much assholery not to be shared. Despite being a shitty boss who fired employees for criticizing him on Twitter, he promised an "unlimited" legal defense fund to fight employers that fire employees over something they wrote on Twitter. Under his tweet, a lot of "verified" (= right wing) accounts applauded this and asked him to go after employers who fired employees for having written something homophobic

I tried deleting and recreating the container, but there's still this insane power consumption. For comparison, cloudflared running a tunnel for hundreds of users takes 0.06% of CPU time on the same server


Anyone using “docker run” instead of “docker compose”?
For the vast majority of docker images, the documentation only mentions a super long and hard-to-understand "docker run" one-liner.

Why is nobody placing an example docker-compose.yml in their documentation? It's so tidy and easy to understand, and much easier to run in the future: just set and forget. If every image shipped a yml to copy, I could get it running in a few seconds; instead I have to decode the one-liner and turn it into a yml.

I want to know if it's just me being out of touch and I should use "docker run", or if a "one-liner" simply looks tidier in the docs. Like saying "hey, just copy and paste this line to run the container. You don't understand what it does? Who cares".

The worst are the ones that pipe directly from curl into "sudo bash"...
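For illustration, here's how mechanical the translation usually is; the image name and flags are made up:

```
# docker run -d --name myapp -p 8080:80 -v /docker/myapp:/data -e TZ=UTC myimage:latest
# becomes (detached mode is implied by compose):
services:
  myapp:
    container_name: myapp
    image: myimage:latest
    ports:
      - "8080:80"
    volumes:
      - /docker/myapp:/data
    environment:
      - TZ=UTC
```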

Is Outline VPN still a good option to host?
In a few months I need to go to an authoritarian country for work, and I need a VPN. Last time (2018) the only one that worked reliably was Outline, based on a self-hosted VM. But I went back to look and it seems abandoned: link rot on the home page if I select my language, no Docker install in 2023, and the fact that it was developed by Google is a big negative sign (not because of privacy issues, but because they get bored with their projects, abandon them, and start an identical but incompatible one).

Are there better alternatives in 2023? I need the best possible chance of connecting to the uncensored internet, as otherwise I can't work and my 2-week work trip becomes a 2-week vacation where I can't even check my email or work chat

Could my simple Python program softbrick an IMAP server?
I bought a domain on OVH and, since I expected little use of email on that domain, I made do with the "included" email. The one that uses roundcube 1.4 (from the last century) as webmail.

One day I get an email:

>Votre boite mail a atteint 90% de l'espace maximum autorisé sur le serveur. (your mailbox is full)

"It must be a bug, I have 5gb of space and only a dozen mails in the box" - I say to myself.

But instead all the mails sent to this mailbox bounce with the error "mailbox full". Since the mails are of little importance, I delete everything and empty the trash. Now it is empty, but incoming mails are still rejected with "mailbox full".

I open a ticket. They don't believe it at first, but after I send screenshots of both the roundcube quota and their panel showing that the usage is zero, they restore my access.

After a couple of days I again get the usual email in French: "your box is full". I reopen the ticket and they tell me that everything is normal and that it is the fault of my email client.

Now that I think about it, I did some experiments with Python to see if I could read the mails on the server, but I used read-only access, so I don't think it's my fault... Here is the code:

```
import imaplib
import email

QUANT_EMAIL = 10  # how many recent emails to list

class GetMail:
    def __init__(self, connection):
        self.connection = connection

    def fetch_emails(self):
        # Connect to the IMAP account in read-only mode
        imap = imaplib.IMAP4_SSL(self.connection.imap_server)
        imap.login(self.connection.username, self.connection.password)
        imap.select('INBOX', readonly=True)
        status, data = imap.search(None, 'ALL')
        email_ids = data[0].split()
        email_ids = email_ids[:QUANT_EMAIL]
        for email_id in email_ids:
            status, data = imap.fetch(email_id, '(BODY[HEADER.FIELDS (FROM SUBJECT)])')
            raw_email = data[0][1]
            print('Raw Email:\n', raw_email.decode('utf-8'), '\n')
        if email_ids:
            try:
                email_index = int(input("Enter the index of the email to download: "))
            except ValueError:
                email_index = -1
            if 1 <= email_index <= len(email_ids):
                email_id = email_ids[email_index - 1]
                status, data = imap.fetch(email_id, '(RFC822)')
                raw_email = data[0][1]
                email_message = email.message_from_bytes(raw_email)
                eml_filename = "email_temp.eml"
                with open(eml_filename, 'wb') as eml_file:
                    eml_file.write(raw_email)
                print("Email downloaded:", eml_filename)
            else:
                print("Invalid email index.")
        else:
            print("No email found.")
        imap.logout()
```

It seems strange to me that if I log in one day with this program, then a few days later the box shows as full even though it is not, and deleting messages does not restore access. Maybe if you log in with "readonly=true", the server has a bug where it puts the whole box in read-only mode and can no longer receive mail?

Something to search files in my LAN?
At the moment I use a super hacky and unstable setup where multiple instances of [everything](https://www.voidtools.com/) in ETP server mode run under a Windows Core VM, and *when it works*, it works fine. But sometimes it stops updating the index, sometimes it just crashes, sometimes it serves the wrong db, and so on.

So: is there a better way? Maybe a web app? Possibly with multiple users, as some directories are only for me and not for the rest of the family.

The solution can also be paid (but with a reasonable price, as this is for personal use and I can't justify the $$$ for an enterprise solution)

Switching from Apache to Caddy: how to use php-fpm?
Hi, some bot for some reason really needs to scrape my WordPress blog over and over again, overheating my poor Celeron. Out of laziness I am just using the default wordpress docker image, which is php+apache.

I did some experiments, and with php-fpm+caddy it's much faster. Now I want to migrate all my WordPress blogs, five in total, and manage them from a single Caddy instance.

Does php-fpm need to have the files mounted at the same path, or can it be different? For example, the wordpress install places the files in /var/www/html, so for Caddy I mount the same path, and in the Caddyfile I have:

```
test.example.com {
	root * /var/www/html
	php_fastcgi wordpress:9000
	file_server
}
```

If I have multiple installs, can I mount different paths like /var/www/html2 and so on (but only on the Caddy side), or must the paths match in both containers?
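A sketch of a single Caddyfile serving several installs, assuming one php-fpm container per blog (all names are placeholders). Note that root is the path Caddy passes to php-fpm as the script location, so each install's files should sit at the same path in both the Caddy container and its fpm container:

```
blog1.example.com {
	root * /var/www/blog1
	php_fastcgi wordpress1:9000
	file_server
}

blog2.example.com {
	root * /var/www/blog2
	php_fastcgi wordpress2:9000
	file_server
}
```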

Imho a decision that doesn't make sense: Google has already paid for all the infrastructure, so running the service is pure profit for them, not to mention the lost opportunity to upsell Google Workspace (paid Gmail with a custom domain) or Google Cloud services to all those customers. Then again, I would never subscribe to anything from Google for my business, because when they eventually change their mind and kill the product, I'd have to migrate in a hurry. I've been hurt too many times

Finally, after so many years, the "never combine labels" setting is back and I can uninstall ExplorerPatcher