• 0 Posts
  • 12 Comments
Joined 3 months ago
Cake day: June 29th, 2024

  • “Escape hatch” specifically refers to the speculation that Valve is positioning themselves so that they can’t be forced into paying fees for existing on the Windows platform, and that if push comes to shove they can say they only support Linux now. This hasn’t happened yet, but it’s a strategic stance which will likely prevent it from even beginning to happen. This doesn’t have to do with the Steam Deck specifically; it was also part of their intentions with the Steam Machine, etc.


  • Maybe it needs to be more obvious that there are many ways to do things in Linux, and new users should get a short “learning to learn” primer on how things operate differently in Linux-land, and where/how to look online for help. There are always first-boot popups, but I imagine most people are conditioned to click out of them without even reading; forcing people to confirm a couple of times that they want to skip “very helpful reading” might cut down on the number of people who play the search engine lottery for their first steps.

    Also semi-related, I hope that mainstream Linux eventually “un-stupids” computers for regular people again. I get the distinct feeling that Microsoft and Apple have, at least somewhat intentionally, imposed ‘learned helplessness’ onto average computer users. “Oh computers are magic no one knows how they work. We are the only wizards that could possibly understand them and we will sell you the solution.” Windows/OSX/iOS/etc are so locked down that people have rightfully learned over time that if they run into a problem, there really is no solution. I suspect that’s permeating into the new user experience on Linux where people will encounter one problem and throw their hands up and say “fucking computers” instead of using basic problem solving to try another approach.


  • Their rough new user experience is concerning though. From what they described I suspect many of their “problems” are not actually “real”, but it doesn’t really matter because they still ended up in a scenario where they thought there were problems. How did they end up thinking that everything must be done in the terminal while using Ubuntu? I know in the last ~10 years there’s been a big focus on the new user experience, so what more can be done to prevent this? My gut says there are too many online resources that confuse new users when they try to onboard themselves - especially resources that are old, written for other distros, or written for people who just want to find the command they can copy-paste to do something.


  • Gaming has been the only pathway to the mainstream desktop since forever. I’ve been around for a hot minute, and I remember that for years the “real Linux users” consistently repeated “we don’t need gaming, this is an adult OS, go back to Windows and play with your toys” and then turned around and whined that no one wanted to use desktop Linux. Valve stepped in and casually created the year of the Linux desktop as a side effect of just wanting an escape hatch for their business model. Now the casuals and elitists alike will have a better experience via the magic of Marketshare, and all it really took was not listening to people who don’t know what’s good for them.


  • As for the other reasons why Soatok thinks Signal is better, well those are cherry picked and highly opinionated

    In the Signal article, yes: by design those are his opinions on what the minimum requirements are for “beating Signal”, and they are not all objective truths. These articles are a response to people evangelizing one messenger or another to him, some of which have “stronger” security but negatives in other places that make them unacceptable for widespread use (especially for non-techies). In the XMPP article I think the remaining points are very fair in terms of how the XMPP ecosystem works today.

    Signal is a snake-oil vendor

    Post-quantum encryption is an active R&D field with no proven to work solutions yet

    Okay, that’s enough of my time. These replies are for the benefit of others, and I hope I’ve said enough for people to make their judgments with more info than what you initially responded with.


  • That Soatok jumps on this in their article without checking what the spec actually was in previous versions makes me think they didn’t really look very closely, but rather just looked for superficial support of their preconceived opinion.

    Looking in the spec design document instead of digging through the source code is normally enough research in other projects where the spec design document is properly filled out. It’s a mistake on OMEMO’s part that the spec design document didn’t include the truncation step in 0.4.0, and this mistake was fixed in their 0.7.0 version. Either way, as I said this is a positive outcome because now we have clarification.

    I briefly recounted the points made in the article, and I think this was the only one that was against OMEMO directly. Soatok made another post days earlier about why nothing on the market is currently better than Signal, and it makes sense that the other three points are still being leveraged against XMPP+OMEMO as they exist in reality, and not against OMEMO alone. It doesn’t matter that OMEMO 0.7.0 is sufficient if nothing is using it, and the various implementations have their own issues. If you want to use XMPP+OMEMO today, you’re likely using Gajim, Dino, or Conversations, or someone you’re talking to is. These are still on 0.3, and that is a point which again is important to bring up and potentially solve. If no one is talking about the problem, it will not get solved.

    Re: quantum encryption, we know what quantum computers are capable of. You’ve suddenly turned into a quantum expert, so you must also know that you only really need to protect your asymmetric encryption with PQ, and that you can combine PQ and traditional algorithms in cases where you don’t want to degrade existing security if there are flaws in the newer PQ algorithm (a rough sketch of that hybrid idea is below). This would be the “serious” response, and “serious” software like Signal agrees.
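    To make the hybrid point concrete, here is a minimal, illustrative sketch in Python. It is not Signal’s PQXDH or anything from the OMEMO spec; the X25519 half uses the real cryptography library, while the post-quantum KEM secret is a placeholder (random bytes), since the actual call depends on which KEM binding you use.

    ```python
    # Minimal sketch of "hybrid" key agreement: combine a classical X25519 shared
    # secret with a post-quantum KEM shared secret, so the session key only falls
    # if BOTH algorithms are broken. Illustrative only; not Signal's PQXDH.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Classical half: ordinary X25519 Diffie-Hellman.
    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()
    classical_secret = alice.exchange(bob.public_key())

    # Post-quantum half: in reality this comes from an ML-KEM/Kyber encapsulation
    # (e.g. via a liboqs binding); random bytes stand in for it here.
    pq_secret = os.urandom(32)  # placeholder for the KEM shared secret

    # Combine both halves with a KDF; an attacker must recover both to get the key.
    session_key = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"hybrid-handshake-example",
    ).derive(classical_secret + pq_secret)

    print(session_key.hex())
    ```

    The point is simply that the derived key is only recoverable by an attacker who can break both the classical exchange and the PQ KEM, so adding PQ can’t make things weaker than what you already had.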



  • The author of the article is a professional cryptographer with a long history of writing human-readable articles on serious cryptographic subjects. I think it would be polite to give them the benefit of the doubt and assume that they are not being a hater for the fun of it, especially when they’ve shown their work.

    Cryptography is to be taken very seriously. One implementation bug or one weak attack vector and you’re done. If you’re switching your algorithms around and not explaining why, it’s very reasonable for a cryptographer to wonder what exactly you think you’re doing, and whether the implementation is in good hands. Maybe there are valid reasons for these changes, but we shouldn’t have to guess on something this important. If this article is what it takes to get clarification from the OMEMO authors on what exactly their design is, that is a positive outcome for everyone.

    If you think post-quantum is “snake oil” you clearly don’t know the first thing about cryptography, so why are you putting on a confident face here and disparaging the author instead of taking a few moments to research the topic first? Hint: pre-quantum communications can be captured and stored, to await the power of quantum computing to crack them. Post-quantum means that your conversations today remain safe tomorrow.


  • I recommend a dead man’s switch like Healthchecks.io, which can be self-hosted for free. Whenever you have something that runs on a schedule, add an extra callout to its unique Healthchecks UUID as part of the automation, and Healthchecks will send you a notification if anything misses its callout schedule. You can also attach whatever data you like (e.g. a log) to the callout so you can look back through the run history. IIRC Borg returns a non-zero exit code if it detects problems, so you can ping e.g. https://hc-ping.com/your-uuid-here/$? and a non-zero code will trigger a notification as well (there are more examples in the Healthchecks docs, plus a small sketch below).

    Also, Borgmatic is really easy to use for managing Borg repos. There are a lot of configuration options (including Healthchecks.io integration), but you can delete like 90% of it for normal use cases.
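    Here is a minimal sketch of that dead man’s switch pattern in Python rather than a cron one-liner. The check UUID, repo path, and archive name are placeholders, and borgmatic’s built-in Healthchecks hook does essentially the same thing for you:

    ```python
    # Run a Borg backup, then report its exit status and log to Healthchecks.
    # Appending the exit status to the ping URL marks the check as failed when
    # it is non-zero, and the request body shows up in the check's run history.
    import subprocess
    import urllib.request

    PING_URL = "https://hc-ping.com/your-uuid-here"  # placeholder UUID

    # Run the backup and capture its output so it can be attached to the ping.
    result = subprocess.run(
        ["borg", "create", "--stats", "/path/to/repo::backup-{now}", "/home"],
        capture_output=True,
        text=True,
    )

    log = (result.stdout + result.stderr)[-100_000:]  # keep only the tail of the log
    req = urllib.request.Request(
        f"{PING_URL}/{result.returncode}",
        data=log.encode(),
        method="POST",
    )
    urllib.request.urlopen(req, timeout=10)
    ```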


  • For normal desktop users, yeah, Debian Stable + Flatpaks is a winning combo for picking the software that you want to be cutting-edge and leaving the rest to rock-solid stability. Normally Linux distros keep a full ecosystem of packages that interoperate and depend on each other, but solutions like Flatpak have their own little microcosm of dependencies that can be used independently of the host distro. There are also Debian Backports for when you want native Debian packages that are more cutting-edge but still compiled to work with your older base system. Backports are not available for most packages, but sometimes the important ones are, like the Linux kernel itself. You can also try to compile your own backports, but you’ll be responsible for updating them.



  • I used Proxmox for a couple of years and it’s good if you run a lot of VMs or LXCs, but I found that I’m not really the target audience. I ended up only running one Debian VM for my Docker containers. It was fine, but I eventually felt that Proxmox added no value for me, and the end result was sacrificing some memory and performance to the virtio virtualization of CPU/GPU/RAM/filesystems. If your machines only have 8-16GB of RAM I don’t think it would be a good idea, as the rule of thumb I’ve seen is to dedicate 2GB to Proxmox itself, on top of any guest OS’s requirements. Meanwhile I have a Debian install on a VPS that takes about 450MB of RAM.

    For me, pros:

    • Native ZFS support - invaluable, ZFS is terrific. MergerFS+SnapRAID is a decent replacement, but the dodgy tooling and laundry list of footguns make me nervous to use it on important data. ZFS is idiot-proof, as long as you know what you’re doing during the initial setup. RAIDZ expansion is coming this year, and you can still use mixed-size disks in a RAIDZ as long as you accept that all disks are treated as if they were the smallest one (e.g. a RAIDZ1 of two 4TB disks and one 8TB disk gives the capacity of three 4TB disks), so I personally feel ZFS is acceptable for grab-bag disk usage now
    • Separation of bare metal and server environment, which means you can spin up another server VM from scratch without impacting the previous one, then switch with zero downtime. In the end, I replaced Proxmox with Debian on ZFS root (ZFSBootMenu) and wrote a few hundred lines of bash to automate the installation, so when I switched it only took about 30 minutes of downtime start to finish.
    • Isolation of different environments. If my VM gets hacked, it will have a harder time reaching my Proxmox host etc. I run all services in isolated Docker environments anyway so this isn’t that big of a perk for my threat profile.

    Cons:

    • Partitioning RAM for ZFS ARC, Proxmox, and VM leads to inherent inefficiencies at the margins.
    • I usually give my VM n-1 CPU cores, which is still less power than if I had just used the CPU natively.
    • GPU passthrough to a VM can be less efficient, depending on the GPU and how it handles it. My iGPU is less performant when using its ~SR-IOV feature
    • Learning requirement - not a huge learning curve but it’s a lot of knowledge that I will not use now that I’ve stopped using Proxmox
    • Hosting your data pool on the Proxmox host or a dedicated data VM means that your server VM needs to use NFS to access its data, which lacks a handful of features (e.g. inotify) and is a pain
    • Need to maintain two systems for updates, downtimes, etc
    • More points of failure
    • Extra startup time
    • Run by a company that thinks it’s okay to show WinRAR-style nag popups every time you load the console, and requires you to manually dig through the source to disable them. I understand it’s their business model, but that doesn’t change how it affects me, the end user who lacks $120/year to spend on disabling a popup