That’s a cute idea but it completely ignores that it isn’t 2005 anymore. Algorithms are good enough to detect that.
Loss of control of this data would be catastrophic, so I took its security very seriously.
Ask yourself: “If my current system is unavailable: How screwed am I?”
If the answer is anything less than “Not screwed at all!”, then it is time for a backup - regardless of what system you’re using or plan to use.
At all stages I want to be left alone and just do what I’m paid for.
So Arch now is a corporate distribution?
The screen capture protocol was merged a month ago.
That’s part of the issue I have with Wayland protocols. It was added a month ago. After several years! During research I found discussions that were ~6 years old; this PR was 2 years old and superseded another request that was 4 years old.
In the meantime some environments implemented that on their own without waiting for the protocol. If I understand correctly: Gnome as well as KDE have implemented it outside the protocol. And Hyprland devs forked wlroots to advance development faster and also add that. (Correct me if I’m wrong.)
Since labwc uses wlroots (but is a bit slow in adapting to new versions), it will take quite some time before I can put a checkmark after my last usecase. I am optimistic that it will work. But I have accepted that it may take several years for new functionality to be added, a few months before that functionality arrives in wlroots, and some time after that before it lands in labwc.
No one will use a fork of Wayland. That would be suicide.
Famous last words …
You cannot even record single windows without having your DE patching that in for you.
X11 […] has become an unmaintainable patchwork of additions.
Wayland will be an unmaintainable patchwork of protocols once it has the same functionality as X11.
Now, 12 years later, it still is not production ready.
I use it on both my laptop and my desktop computer. It got better during the last 1-2 years.
While my laptop (13" 1080p screen) is pretty much fine running Hyprland on an integrated Intel GPU, on my desktop computer with a 28" 4K screen scaling is completely messed up and needs tweaking, sometimes down to a per-program basis. Sometimes the font is gigantic, sometimes I need a microscope to see anything. That was definitely better on X11.
On my desktop I run labwc, which does not come with its own functionality regarding this: I just recently got whole-screen video recording and now likely have to wait another year or two for single-window recording. (There is a protocol for this that took two years to be merged, which is just ridiculous for such a low-level base functionality that should have been implemented from the beginning.)
Other than that, all my common programs are running okay with Wayland.
I personally think it is a very bad idea to “speed run development” of protocols.
Stalling the development of protocols for nearly a decade is bad, too.
They should talk and meet somewhere between “Just develop in production!” and “I personally dislike it for non-technical reasons, so I will block it for everyone!”
I actually just run the update commands individually when I feel like it.
su -l -c 'pacman -Syu' # All regular packages
pakku -Syu # All AUR packages (I know this updates regular packages, too.)
flatpak-update # Update Flatpak packages with a function I wrote
Since I do not trust Flatpak (especially when it comes to driver updates and properly removing unused crap) I once created this monstrosity.
flatpak-update () {
    # Remember the version suffix of the currently installed Nvidia driver runtime,
    # e.g. "nvidia-550-78" from "org.freedesktop.Platform.GL.nvidia-550-78"
    LATEST_NVIDIA=$(flatpak list | grep "GL.nvidia" | cut -f2 | cut -d '.' -f5)
    flatpak update
    # Remove runtimes no app needs anymore, including their data
    flatpak remove --unused --delete-data
    # Uninstall every 32-bit Nvidia driver runtime that does not match the current
    # version; xargs -o keeps stdin attached so flatpak can ask for confirmation
    flatpak list | grep org.freedesktop.Platform.GL32.nvidia- | cut -f2 | grep -v "$LATEST_NVIDIA" | xargs -o flatpak uninstall
    # Check the local installation for inconsistencies and fix them
    flatpak repair
    flatpak update
}
The initial problem with Flatpak (thinking it would be a good idea to add dozens of Nvidia drivers and re-download and update all of them on every update, causing a few gigabytes of downloaded files on every run of a normal flatpak update even if nothing needed to be updated) is reportedly fixed, but I just got used to my command.
Absolutely. They’re advertised for use in datacenters, so I assume noise optimization wasn’t a concern for Seagate when creating those drives.
Sorry, I can’t hear you under my enormous piles of money! 🙃
But yeah. You should do an SSD-only setup if this is within your budget. I assume that for most of us selfhosting is just some sort of hobby. If you’re willing to spend money on the latest and coolest tech: do it. If not, then that’s fine, too.
Okay, so … then maybe really look into the Seagate Exos drives. 20 TB should be pretty much fine for most selfhosting adventures.
I’m looking for something from 4TB upwards.
If you say “harddrive” … do you mean actual harddrives or are you using it synonymously with “storage”? If you really mean actual harddrives, it’s hard to even find datacenter/server harddrives below 4 TB. Usually server HDDs start at 8 or 12 TB. You can even find HDDs with 20 TB - the Seagate Exos series for example, starting at around 360 Euros (ca. 400 USD).
If you’re after general storage, preferably SSD, that’s another issue. There is the Samsung 870 QVO (8 TB) SSD that is often advertised as a “datacenter SSD” (so I assume it would run well in a server that is active 24/7), but it is currently available with a maximum of 8 TB. The 870 QVO is at ca. 70 Euros per terabyte (ca. 77 USD), which, in my experience, is the current price range for SSDs. So the total price looks high from the outside, but it’s actually fine. It’s also a one-time investment.
For selfhosting I’d go with an SSD-only setup.
do any have particularly good or bad reputation?
From personal experience I’d say, stick with the “larger” brands like Samsung or Seagate.
It first checks if ~/.bashrc.d is an existing directory. If this is the case, it then iterates over all entries in that directory. For each entry it checks if it is a file, and if so, it sources that file using the bash-internal shorthand . for source. So it basically executes all scripts in ~/.bashrc.d. This makes it possible for you to split your bash configuration into multiple files. This is quite common, and a lot of programs already support it (it 100% depends on the program, though).
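For reference, the snippet in question usually looks something like this (roughly how e.g. Fedora ships it in /etc/skel/.bashrc; the exact wording in your file may differ):

if [ -d ~/.bashrc.d ]; then
    for rc in ~/.bashrc.d/*; do
        if [ -f "$rc" ]; then
            . "$rc" # source every regular file in ~/.bashrc.d
        fi
    done
fi
unset rc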
This is absolutely harmless as it is. But: if you or a program places anything in the directory ~/.bashrc.d, it WILL be sourced every time you start a bash.
A slightly better variant would be iterating over ~/.bashrc.d/*.sh instead of just ~/.bashrc.d/* to make sure to only grab files with the .sh suffix (even if suffixes are basically meaningless from a technical point of view), and to also test for the file being executable (-x instead of -f).
This would make sure that only files that end with .sh and are executable are sourced. The “attack vector”, if you want to call it like that, would then be a bit more narrow than just placing a file in a directory.
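A minimal sketch of that hardened variant:

if [ -d ~/.bashrc.d ]; then
    for rc in ~/.bashrc.d/*.sh; do
        if [ -x "$rc" ]; then
            . "$rc" # only source executable files ending in .sh
        fi
    done
fi
unset rc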
As for why it’s there: Did you ever touch your .bashrc? If not, maybe it has been there since the beginning, because it’s in the so-called skeleton (see /etc/skel/.bashrc) that is used to initialize certain files on user account creation.
Use XMPP. Thanks to Let’s Encrypt being implemented in basically every reverse proxy, setting it up is a matter of seconds.
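As an illustration (assuming Prosody as the XMPP server and certificates already obtained from Let’s Encrypt, e.g. via certbot; adjust the path for your setup), importing the certificates is a one-liner:

sudo prosodyctl --root cert import /etc/letsencrypt/live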
*Neiße