• 20 Posts
  • 297 Comments
Joined 1Y ago
Cake day: Jun 02, 2023


The next TrueNAS version replaces Kubernetes with Docker Compose; you could try a nightly to see if that works for you


I would have assumed that maps are disabled by default and that all requests are proxied by the server to some Mapbox API endpoint configured by the admin


Doesn’t it have an option to split it?

When I did my Google Takeout to delete all my pics from Google Photos, there was an option to split it into something like “one zip every 2 GB”


MinIO is definitely not designed to be self-hosted on a small server by normal people; it’s more for enterprise use, where you have multiple servers and you’re paying hundreds of thousands of dollars for support



When I had only the file server, I turned it on via WOL each time I actually needed it, and a script shut it down if there was no activity after 11pm (a sketch of the idea below).

Now I host so much stuff, and I’m so dependent on it, that it requires redundant power and failover WAN via 5G…
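Not my original script, but a minimal sketch of that idea; the MAC address and the “activity” test are placeholders, and it assumes SMB/NFS sessions are the activity that matters:

```
#!/bin/sh
# Run from cron after 23:00, e.g.: 0 23 * * * /usr/local/bin/idle-shutdown.sh
# Power off if no SMB (445) or NFS (2049) session is currently established
if [ "$(ss -H state established '( sport = :445 or sport = :2049 )' | wc -l)" -eq 0 ]; then
    shutdown -h now
fi

# Waking it up again from another machine:
#   wakeonlan aa:bb:cc:dd:ee:ff
```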



If you want to use it exclusively as a NAS, then why not TrueNAS?

I have an unRAID server, but the NAS part is nowhere near as good as TrueNAS (slower, worse AD integration)

The main issue with virtualization is the bootable USB with the serial number that’s used as DRM


0.69? Here they charge 0.89, which is a nice 1000% markup over the electricity price. And the shame is that everyone copied the insane pricing, still blaming “sorry, the war in Ukraine forces us to double the prices” even though electricity prices have now come back to what they were at the beginning of 2021


Looks like the team knew what was going on: three weeks ago the main European competitor hired this Tesla Supercharger manager as CEO https://newmobility.news/2024/04/18/ionity-snaps-up-tesla-supercharger-europe-boss-as-new-ceo/


With this move Tesla effectively gave up on the $7.5 billion package from the US government to build new infrastructure. Since the network was paid for with private money, it isn’t required to be nationalized. It’s also accessible to anyone, at prices that are reasonable.

Also, I don’t know how efficient the government is in the USA, but in my country the chargers built by the semi-nationalized electric company are almost always broken, because they don’t really care about profits. They have the unlimited government budget, so what’s the issue if a charger breaks and gets fixed after 2 years, making zero revenue during that time?


Any H1B visa hostage can pick up someone else’s work on xitter for slave hours, but for cars you need experience, talent and know-how. If you fire a whole team to save $x, then you have to pay $5x to rebuild that team when you eventually need it. (Tesla probably needs a team to develop new models, eventually. Same for the policy team, useful for launching the auto taxi when it exits beta in 2045.)


Not only will the board not fire him, it will probably even vote to approve that bonus he’s craving



Intel GPU = any integrated graphics from any Intel CPU made in the last 8 years. This includes those crusty $10 Celerons; you don’t need a dedicated Intel Arc GPU (unless you’re streaming to dozens of users at the same time)

Details of the supported formats: https://en.wikipedia.org/wiki/Intel_Quick_Sync_Video
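As a concrete sketch, exposing QuickSync to a container is just a device passthrough; this assumes a Jellyfin container and the usual render-node path (verify yours with `ls /dev/dri`):

```
services:
  jellyfin:
    image: jellyfin/jellyfin
    devices:
      # hand the iGPU's render node to the container for QSV/VA-API transcoding
      - /dev/dri/renderD128:/dev/dri/renderD128
```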


NVMe for videos seems like money wasted, unless you are serving 4K videos over a 10 Gbit connection to multiple users


ownCloud Infinite Scale is a rewrite of ownCloud (= Nextcloud) in Go; it supports local, NFS and S3 mounts. Change the SMB share to NFS and it might fit you

Disadvantages are:

  1. All the plugins need to be rewritten, so if you need some extra feature, it’s going to be missing
  2. They got acquired by a company that sells an expensive alternative for corporations (RIP? Who is paying millions to maintain a free alternative/competitor?)
  3. Documentation is inferior, community is much smaller

Bought two, and one of them died within 72 hours.

It was really weird: first it became read-only, then it zeroed itself, but it still was read-only; no program was able to write to it, not even aban (dban is dead)

The replacement is now more than 2 years old, but I demoted it to a low-activity server


I’m also using that drive, but it likes to stay toasty: it’s always in the 60-65 °C range even with low activity.

I don’t really like that. I bought a heatsink and it improved a bit


I noticed that SSD prices almost doubled in the last months. I bought a 2 TB NVMe for 89 euros and now it costs almost double.

WD and Seagate are using the AI hype as an excuse to increase prices on both SSDs and HDDs. They say AI bros are buying too many drives to store the models. I don’t find this very believable: typical models are a few hundred GB, I don’t think they’re pushing demand that much


I’m using Nextcloud and I like it (and I don’t see all this slowness, even though I run it on a Core i3 8100), but it’s the general stance of the devs.

Everything is announced as if it’s ready for the public when it’s just a proof of concept (not even alpha).

One example is the Mail plugin. It’s an unusable early alpha, yet on the blog there are three posts, starting from four years ago, talking about nonexistent features https://nextcloud.com/?s=Mail&wpessid=1612

Same for the Forms plugin: an early alpha that lacks an essential feature like emailing responses to specific addresses (it only sends notifications via Nextcloud). Again, the blog talked about it 4 years ago as if it were ready for everyone.

Or the Trello clone, with many problems, like “ruining” the task sync by creating read-only tasks that get synced via CalDAV.

Or Nextcloud Photos: big post in 2022, but it’s very barebones.

Or Docs: so many posts, yet it has so many problems.

Or the desktop client, where builds are pushed to regular users without testing the installer script (forced reboots without confirmation, killing explorer.exe instead of asking for a graceful restart).

The only NC plugin that I’m using without problems, and that I feel has passed the beta stage, is Music with its Subsonic-compatible server. No blog posts about it. Maybe because it’s hosted in the ownCloud GitHub repository


Doesn’t help that every official Nextcloud announcement promises the moon while delivering not even stardust.

Example: this blog post from two years ago: https://nextcloud.com/blog/plan-your-next-trip-with-nextcloud-maps-new-features/

None of the features described in that post are available, even today.

It’s something that might arrive in a decade, if someone is inspired by the mockups and codes it. When you install the Maps plugin it shows a map of the world, and that’s it.

If they need to announce a concept that only exists as a mockup, either publish the news on April 1st or write “concept of how maps might integrate with Nextcloud 50”


I personally don’t know anyone who uses the in-car GPS over Google/Apple Maps anyway.

And this is why the newer GM cars don’t support Android Auto / CarPlay


If you need unlimited cheap accounts: MXroute. Sometimes he does lifetime promos. For webmail he has a custom version of Roundcube with some paid plugins that give it a Gmail-like skin, or another paid webmail like crossover. He used to offer Afterlogic webmail but then stopped “because nobody’s using it and it’s hard to set up for alias domains”; a pity, because I liked the aesthetic. You can set up forwarding from unlimited aliases to Gmail, but this is monitored: if you receive (and forward) too much spam, or use it to send thousands of useless activity notifications, he’s going to block or throttle that, because he wants to keep his sender reputation high. For example, he doesn’t forward any email from Facebook, or Wordfence notifications.

If you need a single inbox: Zoho Mail. You can set a catch-all on unlimited alias domains that goes into the same inbox. And if a specific address needs to be blocked, for example you signed up to Temu using temu@example.com and now they’re bombarding you with endless spam and ignoring your stop requests, you can set it to reject all emails directed to that temu@ address. Emails can be forwarded, but only if you set a custom filter in the webmail; it’s a bit limited.

I am paying for both: monthly for Zoho and a lifetime for MXroute (lifetime = MXroute is a single-man operation, so it’s not my lifetime but rather… *KNOCKS WOOD*)


I am using Zoho Mail and I like it a lot, but there are two disadvantages:

  1. The free tier has no IMAP support.
  2. The web app for some reason doesn’t allow logging in to two separate accounts at the same time. Only the Electron app, which is just a glorified WebView of the web app, allows multiple accounts, for some reason. I have three paid accounts ($1/month) and I’m a bit annoyed by that: I have to use three different browsers, or Firefox containers, to switch accounts.

For the rest it’s excellent. The spam filter can be finely tuned in the admin panel, like “block all domains like xxx” or “block all emails that contain those words”. And you can set it to bounce “address not found” to annoy the worst offenders that don’t respect your privacy. After a very short training (1 week!) it’s very rarely wrong, unlike Gmail: if it’s in spam, it’s definitely spam; if it’s in the inbox, it’s 95% OK. Unfortunately you can’t block entire TLDs like .su or .monster, which are used exclusively by spammers.

And the webmail is very pretty and chock-full of features I’ve never seen anywhere else in a web client. For example, you can add a task or a note to an email, tag another user, and have a parallel conversation around its content, like tagging a colleague to ask their opinion on it. The web client can also add IMAP accounts from other services, and you can switch between them. It keeps them separate and doesn’t import the emails like Gmail does (you can add Gmail/Hotmail/whatever, but you can’t add another Zoho email! Infuriating!). It’s like having a “web version of Thunderbird”.


From what I saw, there’s a good chance it has custom firmware that makes them unusable outside the walled garden



So, protesting for human rights is “violating Google policies”???


I like FreeScout.

It is well made and aesthetically pleasing, but it has four main problems:

  1. The devs are complete assholes. Ask for a clarification because of missing documentation and they ban you from their GitHub repo without even replying.
  2. The program is open source, but everything useful is behind closed-source paid plugins. Plugin pricing is very cheap and subscription-free, but to get the bare usable minimum you need to spend at least $50 on plugins. I had to compile an Excel file with the list of 50+ plugins and rank them, so I could spread the ~$400 of plugins over a year, buying the most important ones first. Then they banned me, because after the first purchases I asked for help with one of their paid plugins. Well, thanks for letting me save $390; I won’t buy anything anymore.
  3. Plugins have unfair regional pricing for some reason. $2 becomes €4; even with VAT the math doesn’t add up. They take the amount in dollars, convert it to local currency using an unfavorable exchange rate and rounding up, then add VAT and round up again. With 40 “must have” plugins, this trick becomes expensive for no reason. Why do Europeans have to pay double? It’s a business expense in 90% of cases, so VAT shouldn’t even be baked into the price, since a business can deduct it.
  4. No support. At all. Even if you are willing to pay for it. They don’t want to set up a forum, use GitHub discussions, or even (ugh, I hate it, but better than nothing) Discord. For a program that’s used for business this is a bit of a problem. How can a company rely on software where, if something is missing from the documentation and you ask for help, you are simply banned and ignored? You discover that you’re banned when, a week later, you go to see if someone replied. I don’t see why they can’t have a $50/hour support package like anyone else.

As an alternative, there’s UVdesk. It’s similar, but by the time I discovered it I had already set up FreeScout with the bare minimum of paid plugins, like “see all emails”, “see list of customers”, “send later”, “add attachment to the email”.


The webmail is comically slow and laggy, even though it permanently stores every single email in the database. It also doesn’t prune them after deletion, which is a huge privacy issue imho. The fact that it saves all the emails in cleartext in the database is concerning for use outside self-hosting: even if it could be set to do end-to-end encryption for files, this way the admin has full access to all emails from all users (well, actually, thanks to the impersonation plugin the admin has access to everything anyway, and it doesn’t even produce an “admin has logged in” notification).

Also, even though it’s this slow, it’s incredibly barebones.

I replaced it with SnappyMail, which is way faster even though it doesn’t cache any email in the database.

I tried many webmail packages and I didn’t see any of them storing emails in the database. Afterlogic, Roundcube, SquirrelMail: they all read from IMAP with no DB, yet they’re much faster. What’s the point of storing millions of emails in a DB if the software is 50x slower than something that doesn’t do that?


I’m guessing that if they want to run Syncthing they have a lot of data to store, and Linode/DO is expensive for that


Right, I forgot about Atom and VS Code; the quality of those is excellent

Obsidian too


OK, but in this case it’s just a webview of mail.proton.me.

It doesn’t have hardware access, it doesn’t work offline (technically, if it cached the files it could work offline, but it can’t work offline-offline like Thunderbird), it doesn’t work with the filesystem, it doesn’t interface with the OS and installed OS packages, it doesn’t use other native binaries, it doesn’t use any deeper native networking capabilities, etc…

From what I’ve seen, the Electron apps that can be considered real apps, and not just a lazy webview around the webapp, are:

  • Bitwarden
  • crickets


Wait for next month’s release of Scale: it very surprisingly comes with jails. You install a Linux distro in a jail, then run gluetun+qbit via docker, as sketched below
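A minimal sketch of that pairing; the image names are the common community ones, gluetun’s provider settings are omitted, and qbit’s traffic is forced through the VPN by sharing gluetun’s network namespace:

```
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=custom   # your provider + credentials go here
    ports:
      - "8080:8080"   # qBittorrent's webui is published on the VPN container

  qbittorrent:
    image: linuxserver/qbittorrent
    network_mode: "service:gluetun"   # all traffic exits via gluetun
    depends_on:
      - gluetun
```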


I always tell myself to write something like this, but then I never have the time or the will to do so…

I’m interested too, if something like this exists


Only the second day? You’re optimistic


Easy: I make a Borg repository not just per server but per directory. This way, if I need a file from Nextcloud with an extremely generic name like “config”, I only search in that repo instead of sifting through 100k similarly named files
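A sketch of what that looks like with borg (the paths and repo names are just examples):

```
# One repository per app directory instead of one per server
borg init -e repokey /backups/nextcloud
borg init -e repokey /backups/wordpress

borg create /backups/nextcloud::{now} /docker/nextcloud
borg create /backups/wordpress::{now} /docker/wordpress

# Searching for that generic "config" file now only touches one repo
borg list /backups/nextcloud::2023-11-05T02:00:00 | grep config
```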


Not sure a €60/year subscription makes sense for viewing a collection of pirated/cracked games via web (this is the old CrackPipe, right?)


Wow

And for a state-sponsored attacker it’s cheaper to bribe (or threaten to kill, which is even cheaper) the single developer into adding a backdoor than to do all the research needed to find a zero-day


I decided that I will update the nextcloud (windows) desktop client once or twice a decade
I've had enough. Last year the automatic updater was rebooting Windows without any warning after the UAC prompt; the problem continued for **months** before being fixed. This year I've been getting an update a week, and it's very annoying to face the same "why u no reboot? I need updates" question every single time I turn on my PC. Today's update killed explorer.exe without any confirmation and didn't bring it back to life.

I don't think their paid enterprise customers are used as ~~beta~~ alpha testers like this. Is it really necessary to push nightlies to end users? Can't they be tested casually for a couple of days and then pushed?

I disabled the update check and will update the Nextcloud desktop client manually every 5 years, if I can remember. I also added an exception to winget so it doesn't update it. I've lost my patience.
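For reference, the winget exclusion can be done with a package pin; a minimal sketch, assuming the package ID is `Nextcloud.NextcloudDesktop` (verify it locally first):

```
# Confirm the package ID before pinning (ID above is an assumption)
winget search nextcloud

# Blocking pin: "winget upgrade --all" will skip this package entirely
winget pin add --id Nextcloud.NextcloudDesktop --blocking
```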

How I accidentally slowed down my nextcloud instance for months
I am running this docker image: https://github.com/nextcloud/docker with a Cloudflare tunnel, meaning the webserver would see all the traffic coming from a single IP in 172.16.0.0/12.

The documentation says:

> The apache image will replace the remote addr (IP address visible to Nextcloud) with the IP address from X-Real-IP if the request is coming from a proxy in 10.0.0.0/8, 172.16.0.0/12 or 192.168.0.0/16 by default

So I thought this was not a problem, as other docker images can also automagically figure out the real IP address from traffic coming from Cloudflare tunnels.

In the beginning it worked fine, then it was SLOW. Like 2 full minutes to load new feeds in News, waiting ages to complete a sync, and so on. I rebooted the server on those occasions, and then it worked fine *for a day*. Because at the time I was running it on Unraid, I blamed the lag on that OS + my weird array of HDDs with decades of usage on them. Migrated to Debian on an NVMe array and... same lag! Wasted hours trying caddy+fpm instead of apache and it was the same: worked fine for a day, then it was slow again.

Then I wondered: what if the program is "smart" and throttles connections by itself, without any warning to the admin, if it thinks that an IP address is sending too many requests?

Modified the docker compose like this:

```
nextcloud:
  image: nextcloud
```

became

```
nextcloud:
  build: .
```

and I created a Dockerfile with

```
FROM nextcloud
# unrelated extra: bz2 PHP extension
RUN apt update -y && apt upgrade -y
RUN apt install -y libbz2-dev
RUN docker-php-ext-install bz2
# enable mod_remoteip and drop in the Cloudflare-aware config
RUN a2enmod rewrite remoteip
COPY remoteip.conf /etc/apache2/conf-enabled/remoteip.conf
```

with this as the content of remoteip.conf

```
RemoteIPHeader CF-Connecting-IP
RemoteIPTrustedProxy 10.0.0.0/8
RemoteIPTrustedProxy 172.16.0.0/12
RemoteIPTrustedProxy 192.168.0.0/16
RemoteIPTrustedProxy 173.245.48.0/20
RemoteIPTrustedProxy 103.21.244.0/22
RemoteIPTrustedProxy 103.22.200.0/22
RemoteIPTrustedProxy 103.31.4.0/22
RemoteIPTrustedProxy 141.101.64.0/18
RemoteIPTrustedProxy 108.162.192.0/18
RemoteIPTrustedProxy 190.93.240.0/20
RemoteIPTrustedProxy 188.114.96.0/20
RemoteIPTrustedProxy 197.234.240.0/22
RemoteIPTrustedProxy 198.41.128.0/17
RemoteIPTrustedProxy 162.158.0.0/15
RemoteIPTrustedProxy 104.16.0.0/12
RemoteIPTrustedProxy 172.64.0.0/13
RemoteIPTrustedProxy 131.0.72.0/22
RemoteIPTrustedProxy 2400:cb00::/32
RemoteIPTrustedProxy 2606:4700::/32
RemoteIPTrustedProxy 2803:f800::/32
RemoteIPTrustedProxy 2405:b500::/32
RemoteIPTrustedProxy 2405:8100::/32
RemoteIPTrustedProxy 2a06:98c0::/29
RemoteIPTrustedProxy 2c0f:f248::/32
```

and now, because Nextcloud sees all the different IP addresses, it doesn't throttle the connections anymore!
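Footnote for anyone copying this: if I read the nextcloud/docker README right, the image also exposes env vars for trusted proxies, so part of the custom Dockerfile may be avoidable. A sketch (variable names from that README, values assumed); note the default rewrite still reads X-Real-IP rather than CF-Connecting-IP, so for a Cloudflare tunnel the remoteip.conf above may still be needed:

```
nextcloud:
  image: nextcloud
  environment:
    # space-separated CIDRs Apache should trust when rewriting the client IP
    TRUSTED_PROXIES: "10.0.0.0/8 172.16.0.0/12 192.168.0.0/16"
    # APACHE_DISABLE_REWRITE_IP: "1"   # or opt out of the built-in rewrite entirely
```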

I left the headline like the original, but I see this as a **massive win** for Apple. The device is ridiculously expensive, isn't even on sale yet, and *already* has 150 apps specifically designed for it. If Google did this, it wouldn't get 150 dedicated apps even years after launch (and the guaranteed demise of the product), and even if it were something super cheap, like being made of fucking cardboard.

This is something that, as an Android user, I envy a lot about the Apple ecosystem.

Apple: "this is a new feature" => devs implement it in their apps the very next day, even if it launches officially in 6 months.

Google: "this is a new feature" => devs ignore it, apps start to support it after 5-6 Android versions.

Uid/gid in docker containers don’t match the uid/gid on the server?
Installed a new Debian server, installed Docker, but now I have a problem with permissions on bind-mounted directories.

On the previous server, the uid/gids inside the docker containers matched the uid/gids on the real server. Root is 0, www-data is 33, and so on.

On this new server, instead, files owned by root (0) in the container are translated to 1000 on the server, www-data (33) is 100032, and so on (an offset added to the uid).

Is this normal or did I misconfigure something? On the previous server I was running everything as root (the interactive user was root), and I would like to avoid that.
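That offset pattern looks like user-namespace remapping rather than a misconfiguration: with rootless Docker, container root maps to the invoking user (1000 here) and other container uids map into a subordinate-uid range (base + N). A few hedged checks, assuming a standard Debian setup:

```
# Does the daemon report rootless or userns mode?
docker info --format '{{.SecurityOptions}}'

# The subordinate ranges; container uid N maps to <base>+N on the host
cat /etc/subuid /etc/subgid

# A daemon-wide remap would be configured here (file may not exist)
grep userns-remap /etc/docker/daemon.json 2>/dev/null
```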

Is there a way to know how much I am spending/going to spend with Amazon S3?
I have several TB of borg backups. I uploaded them to Backblaze B2, and I could immediately see how many resources I was using, how many API calls, and so on. Very easy to see and predict the next bill. I can see exactly which bucket uses more resources, and which is growing over time.

Because I'm cheap, I want to upload those files to AWS Glacier, which theoretically costs a quarter of B2 for storage, but whose API calls are extremely expensive. So I want to know the details. I wouldn't like to get a bill with $5 in storage and $500 in API calls.

I uploaded a backup, but nowhere in AWS can I see how many resources I am using, how much I'm going to pay, how many API calls were made, how much user XYZ spent, and so on. It looks like it's **designed** for an approach like "just use our product freely, don't worry about pricing, it's a problem for the financial department of your company".

In the AWS console I found "S3 Storage Lens", but it says I need to delegate access to someone else for some reason. I tried to create another user in my 1-user org, but after wasting 2 hours I wasn't able to find a way to add those permissions. I tried to create a dashboard in "AWS Cost Explorer" but all the indicators are null or zero.

So, how can I see how many API calls and how much storage is used, to predict the final bill? Or is the only way to pray, wait for the end of the month, and hope everything is itemized in detail?
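For what it's worth, Cost Explorer data is also queryable from the CLI once it has a day or two of history; a minimal sketch (dates are examples, the service name string is the one AWS uses for S3):

```
# Month-to-date S3 cost, broken down by usage type (requests vs storage)
aws ce get-cost-and-usage \
  --time-period Start=2023-11-01,End=2023-11-30 \
  --granularity MONTHLY \
  --metrics UnblendedCost \
  --filter '{"Dimensions":{"Key":"SERVICE","Values":["Amazon Simple Storage Service"]}}' \
  --group-by Type=DIMENSION,Key=USAGE_TYPE
```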

I guess that means it's dead, as there's no way a corporation would pay millions to acquire a competitor just to continue developing a free alternative to their own product

How I accidentally wiped my server by having a typo in my Nextcloud docker config
So, I moved my nextcloud directory from a local SATA drive to an NFS mount from an NVMe array on a 10G network.

"I just need to change `/docker/nextcloud` to `/mnt/nfs/nextcloud` in the `docker-compose.yml`, what's the issue, I'll do it live" - I tell myself.

So I stop the container, copy `/docker/nextcloud` to `/mnt/nfs/nextcloud`, then edit the `docker-compose.yml`... and... because I'm doing it during a phone call without paying too much attention, I change the main directory to `/docker`.

I rebuild the container and I immediately hear a flood of telegram notifications from my uptime-kuma bot... oh oh...

It looks like the nextcloud docker image has an initialization script that, if it doesn't find the files in the directory, deletes everything and installs a fresh copy of nextcloud... so it deleted **everything** on my server.

Luckily I had a very recent full borg backup and I'm restoring it (I kinda love-hate borg: I always forget the restore commands when in panic, and the docs are a bit cryptic for me).

Lessons learned:

1. always double check everything
2. offsite backups are a must (if I had accidentally written `/` as the path, I would have lost the borg backups too!)
3. offsite backups should not be permanently mounted, otherwise they would have been wiped as well
4. learn how to use and schedule filesystem snapshots, so the recovery wouldn't take ages like it's taking right now (2+ hours and I'm not even half way...)
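For future panicked me, the restore commands I always forget (repo path and archive name are just examples):

```
# what archives exist in the repository?
borg list /mnt/offsite/borg-repo

# restore an entire archive into the current directory
cd /restore-target
borg extract /mnt/offsite/borg-repo::nextcloud-2023-11-05

# or restore just one path out of it
borg extract /mnt/offsite/borg-repo::nextcloud-2023-11-05 docker/nextcloud/config
```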

NFS or iSCSI?
So, I got persuaded to switch from a "server that is going to do everything" to "compute server + storage server". The two are connected via a DAC on an Intel X520 network card. Compute is 10.0.0.1, storage is 10.255.255.254, and I left the usable hosts in the middle for future expansion.

Before I start to use it, I'm wondering if I chose the right protocols to share data between them. I set up NFS and iSCSI:

* With iSCSI I create an image, share that image with the compute server, format it as btrfs, and use it as a native drive. Files are not accessible anywhere else.
* With NFS I just mount the share, and files can be accessed from another computer.

Speed: I tried to time how long it takes to fill a dummy file with zeroes.

```
/iscsi# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"
250000+0 records in
250000+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 0.88393 s, 2.3 GB/s

real    0m2.796s
user    0m0.051s
sys     0m0.915s
```

```
/nfs# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"
250000+0 records in
250000+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 2.41414 s, 848 MB/s

real    0m3.539s
user    0m0.038s
sys     0m1.453s
```

```
/sata-smr-wd-green-drive-for-fun# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"
250000+0 records in
250000+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 10.1339 s, 202 MB/s

real    0m46.885s
user    0m0.132s
sys     0m2.423s
```

What I see from these results:

* The slow SATA drive writes at 1.6 gigabit/s, but then for some reason the computer needs a long time to acknowledge the operation.
* NFS transferred at 6.8 gigabit/s, which is what I expected from an NVMe array. The same command on the storage server gives a similar speed.
* iSCSI transferred at 18.4 gigabit/s, which is not possible with my drives and the fiber connection. It's probably using some native filesystem trickery to detect "it's just a file full of zeroes, just tell the user it's done".

The biggest advantage of NFS is that I can share a whole directory and get direct access. Also, sharing another disk image via iSCSI requires a service restart, which means I have to take down the compute server. But with iSCSI I am the owner of the disk, so I can do whatever I want: I don't need to worry about permissions, I am root, I can chown all the stuff.

So... after this long introduction and explanation, which protocol would you use for...:

* /var/lib/mysql - a database. Inside a disk image shared via iSCSI, or via NFS?
* Virtual machine images. Copy them inside another image that's then shared via iSCSI? Maybe NFS is much better for this case; otherwise with iSCSI I would have a single giant disk image that contains other disk images...
* Lots of small files like WordPress. Maybe NFS would add too much overhead? But it would be much easier to back up as an NFS share instead of a disk image.
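One hedged footnote on the benchmark: dd-ing zeroes through the page cache mostly measures RAM and compression, which would explain the impossible 18.4 gigabit/s on iSCSI. A variant that takes both out of the picture:

```
# generate incompressible data once (urandom itself can bottleneck, so do it beforehand)
dd if=/dev/urandom of=/tmp/random.bin bs=1M count=2000

# oflag=direct bypasses the page cache; conv=fsync flushes before dd exits
dd if=/tmp/random.bin of=ddfile bs=1M oflag=direct conv=fsync
```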

A new California law limits cash-to-crypto at ATMs to $1,000 per day per person, and also caps the fees the machines can charge. The industry says this will hurt business, hinting that they're profiting from the lack of KYC policies.

I don't see any legitimate use for those machines. Who would have a legit need to exchange $15k from cash to crypto at 33% fees????

Sorry, another news item about this asshole, but this is too much assholery not to be shared.

Despite being a shitty boss who fired employees that criticized him on Twitter, he promised an "unlimited" legal defense fund to fight employers who fire employees over something they wrote on Twitter. Under his tweet, a lot of "verified" (= right-wing) accounts applauded this and asked to go after employers who fired employees for having written something homophobic.

I tried deleting and recreating the container, but the insane power consumption is still there. For comparison, cloudflared running a tunnel for hundreds of users takes 0.06% of CPU time on the same server


Anyone using “docker run” instead of “docker compose”?
For the vast majority of docker images, the documentation only mentions a super long and hard to understand "docker run" one-liner.

Why is nobody placing an example docker-compose.yml in their documentation? It's so tidy and easy to understand, and much easier to run in the future: just set and forget. If every image had a yml to just copy, I could get it running in a few seconds; instead I have to decode the line and turn it into a yml (an example of the conversion below).

I want to know if it's just me being out of touch and I should use "docker run", or if it's just that a "one liner" looks much tidier in the docs. Like saying "hey, just copy and paste this line to run the container. You don't understand what it does? Who cares".

The worst are the ones that pipe directly from curl into "sudo bash"...
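For illustration, the translation is mostly mechanical; here is a hypothetical one-liner (image name, ports and paths invented for the example):

```
docker run -d --name myapp -p 8080:80 \
  -v /docker/myapp:/config -e TZ=Europe/Rome \
  --restart unless-stopped example/myapp:latest
```

and its compose equivalent:

```
services:
  myapp:
    image: example/myapp:latest
    container_name: myapp       # --name
    ports:
      - "8080:80"               # -p
    volumes:
      - /docker/myapp:/config   # -v
    environment:
      - TZ=Europe/Rome          # -e
    restart: unless-stopped     # --restart
```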

Is outline VPN still a good option to host?
In a few months I need to go to an authoritarian country for work, and I need a VPN. Last time (2018) the only one that worked reliably was Outline, based on a self-hosted VM. But I went back to check and it looks abandoned: link rot on the home page if I select my language, no docker install in 2023, and the fact that it was developed by Google is a big negative sign (not because of privacy issues, but because they get bored with their projects, abandon them, and start an identical but incompatible one).

Are there better alternatives in 2023? I need as many chances as possible to connect to the uncensored internet, as otherwise I can't work and my 2-week work trip becomes a 2-week vacation where I can't even check my email or work chat.

Could my simple python program softbrick an IMAP server?
I bought a domain on OVH and, since I expected little use of email on that domain, I made do with the "included" email: the one that uses roundcube 1.4 (from the last century) as webmail.

One day I get an email:

> Votre boite mail a atteint 90% de l'espace maximum autorisé sur le serveur. (your mailbox has reached 90% of the maximum space allowed on the server)

"It must be a bug, I have 5gb of space and only a dozen mails in the box" - I say to myself.

But instead all the mails sent to this mailbox bounce with the error "mailbox full". Since the mails are of little importance, I delete everything and empty the trash. Now it is empty, but incoming mails are still rejected with "mailbox full".

I open a ticket. They don't believe it at first, but after I send screenshots of both the roundcube quota and their panel showing that the usage is zero, they restore my access. After a couple of days I again get the usual email in French: "your box is full". I reopen the ticket and they tell me that everything is normal and that it is the fault of my email client.

Now that I think about it, I did some experiments with python to see if I could read the mails on the server, but I used read-only access, so I don't think it's my fault... Here is the code (comments translated, and with the two bugs in my original snippet fixed: the `search` result wasn't assigned and `quantemail` was never defined):

```
import imaplib
import email

from connection import Connection  # local module holding the account settings


class GetMail:
    def __init__(self, connection):
        self.connection = connection

    def fetch_emails(self):
        # Connect to the IMAP account (read-only)
        imap = imaplib.IMAP4_SSL(self.connection.imap_server)
        imap.login(self.connection.username, self.connection.password)
        imap.select('INBOX', readonly=True)

        quantemail = 10  # how many messages to list
        status, data = imap.search(None, 'ALL')
        email_ids = data[0].split()
        email_ids = email_ids[:quantemail]

        for email_id in email_ids:
            # Print only the From/Subject headers of each message
            status, data = imap.fetch(email_id, '(BODY[HEADER.FIELDS (FROM SUBJECT)])')
            raw_email = data[0][1]
            print('Raw Email:\n', raw_email.decode('utf-8'), '\n')

        if email_ids:
            try:
                email_index = int(input("Index of the email to download: "))
            except ValueError:
                email_index = -1
            if 1 <= email_index <= len(email_ids):
                email_id = email_ids[email_index - 1]
                # Download the full message and save it as an .eml file
                status, data = imap.fetch(email_id, '(RFC822)')
                raw_email = data[0][1]
                email_message = email.message_from_bytes(raw_email)
                eml_filename = "email_temp.eml"
                with open(eml_filename, 'wb') as eml_file:
                    eml_file.write(raw_email)
                print("Email downloaded:", eml_filename)
            else:
                print("Invalid email index.")
        else:
            print("No emails found.")

        imap.logout()
```

It seems strange to me that if I log in one day with this program, then a few days later the box shows as full even though it is not, and deleting messages does not restore access. Maybe if you log in with "readonly=true", the server has a bug where it puts the whole box read-only and can no longer receive mail?

Something to search files in my LAN?
At the moment I use a super hacky and unstable setup where multiple instances of [everything](https://www.voidtools.com/) in ETP server mode run under a Windows Core VM, and *when it works*, it works fine. But sometimes it stops updating the index, sometimes it just crashes, sometimes it serves the wrong db, and so on.

So: is there a better way? Maybe a web app? Possibly with multiple users; some directories are only for me and not for the rest of the family. The solution can also be a paid one (but with a reasonable price, as this is for personal use and I can't justify the $$$ for an enterprise solution).

Switching from Apache to Caddy: how to use php-fpm?
Hi, some bot for some reason really needs to scrape my wordpress blog over and over again, overheating my poor celeron. Out of laziness I am just using the default wordpress docker image, which is php+apache. I did some experiments and with php-fpm+caddy it's much faster. Now I want to migrate all my wordpress blogs, five in total, and manage them from a single caddy instance.

Does php-fpm need to be mounted at the same path, or can it be different? For example, the wordpress install places the files in /var/www/html, so for Caddy I mount the same path, and in the Caddyfile I have:

```
test.example.com {
	root * /var/www/html
	php_fastcgi wordpress:9000
	file_server
}
```

If I have multiple installs, can I mount different paths like /var/www/html2 and so on (but only on caddy), or must the path match in both containers?
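For what it's worth, a sketch of the multi-site layout I'd try (container names invented). The catch, as far as I understand `php_fastcgi`: Caddy sends a script *path*, and php-fpm resolves that path inside its own container, so each wordpress container must see its files at the same path Caddy uses as `root`:

```
blog1.example.com {
	# wordpress1 must mount its files at this same path
	root * /var/www/blog1
	php_fastcgi wordpress1:9000
	file_server
}

blog2.example.com {
	root * /var/www/blog2
	php_fastcgi wordpress2:9000
	file_server
}
```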

Imho a decision that doesn't make sense: Google has already paid for all the infrastructure, so running the service is pure profit for them, not to mention the lost opportunity to upsell Google Workspace (paid Gmail with a custom domain) or Google Cloud services to all those customers. Although I would never subscribe to anything from Google for my business: when they eventually change their mind and kill my product, I have to migrate in a hurry. I've been hurt too many times.

Finally, after so many years, the "never combine labels" setting is back and I can uninstall ExplorerPatcher