i’m lizard

  • 0 Posts
  • 22 Comments
Joined 3 months ago
Cake day: June 21st, 2024


  • Most paid certs aren’t worth much anyway. The payment and delivery details you enter when buying a DV cert aren’t validated by anyone; it’s literally the same concept as Let’s Encrypt, just proving control of the domain. OV and EV are the only ones that theoretically have any value, but hardly anyone uses those since browsers got rid of the URL bar labeling; even Amazon is on DV nowadays.
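
    You can check this for yourself with the openssl CLI (a DV cert’s subject is just a CN, while OV/EV certs also carry O= organization fields):

        # Print the subject/issuer of the cert a site serves; for a DV
        # cert the subject is just CN=<domain>, no organization info.
        openssl s_client -connect www.amazon.com:443 -servername www.amazon.com </dev/null 2>/dev/null \
          | openssl x509 -noout -subject -issuer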




  • It depends on whether you can feasibly implement compatibility layers for large parts of the “required” but very work-intensive drivers. FreeBSD has the same driver struggles and ended up with LinuxKPI to support AMD/Intel GPUs. There are also a whole bunch of toy kernels that have implemented compatibility layers for parts of Linux in some fashion.

    It’s a ton of work overall but there’s room to lift enough already existing stuff from Linux to get the ball rolling.


  • In my experience, most hangs with a message about amdgpu loading on screen are caused by an amdgpu issue of some kind. I’d check whether amdgpu ends up being loaded correctly via lsmod | grep amdgpu, plus a general journalctl -b 0 | grep amdgpu to see if there are any obvious failures there. Chances are that even if it’s not amdgpu, the real failure is in the journal somewhere.

    Could be a wrong setting of hardware.enableRedistributableFirmware (should be true) or the new-ish hardware.amdgpu.initrd.enable (either value is valid, but one or the other may turn out more reliable on your particular system). A few commands to start with are sketched below.
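
    Something like this, assuming a standard NixOS setup where nixos-option works (it can be finicky on flake-based configs):

        # Did amdgpu actually load, and did it log any errors?
        lsmod | grep amdgpu
        journalctl -b 0 | grep -i amdgpu

        # What do the relevant NixOS options currently evaluate to?
        nixos-option hardware.enableRedistributableFirmware
        nixos-option hardware.amdgpu.initrd.enable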


  • Gonna add a dissenting “maybe but not really”. YT has been really aggressive about this kind of stuff lately and the situation changes month by month. YT has multiple ways of flagging your IP as potentially problematic, and as soon as you get flagged you’re going to end up having to run quite an annoying mess of scripts that may or may not keep working in the long term. There are some instructions in a stickied issue on the Invidious repo.





  • Eh, no. “I’m going to make things annoying for you until you give up” is literally something already happening; Titanfall and the like suffered from it hugely. “I’m going to steal your stuff and sell it” is a tale as old as time; warez CDs used to be commonplace, and it’s generally avoided by giving people a way to buy your thing and giving people who bought it a way to access it. The situation where a third party profits off your game is more likely to happen if you don’t release server binaries! For example, the WoW private/emulator server scene had a huge problem with people hoarding scripts, backend systems and bugfixes, which is one of the reasons hosted servers could get away with fairly extreme P2W.

    And he seems to completely misunderstand what happens to IP when a studio shuts down. Whether it’s bankruptcy or a planned closure, the IP will get sold off just like a company-owned laptop would, and the new owner of the rights can enforce them if they think it’s useful. Orphan works/“abandonware” can happen, just like they can with non-GaaS games and movies, but that’s a horrible failing on the part of the company.





  • Personally, I do believe that rootless Docker/Podman have a strong enough security boundary for personal/individual self-hosting where you have decent trust in the software you’re running. Linux privilege escalation and container escape exploits fetch decent amounts of money on the exploit market, and nobody’s gonna waste them on some people running software ending in *arr when Zerodium will pay five figures for a local privilege escalation or container escape. If you’re running a business, or you might be targeted for whatever reason (journalist or whatever), then that doesn’t apply.

    If you want more security, there are container runtimes that do cooler security stuff under the hood, like Firecracker/Kata Containers, which run each container in a lightweight managed VM, or Google’s gVisor, which intercepts kernel syscalls and essentially reimplements Linux in userspace. Those are used by AWS and Google Cloud respectively. You can integrate those into Docker, though not all networking/etc options are supported.
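
    Hooking gVisor into Docker looks roughly like this (a sketch following gVisor’s docs; assumes runsc is already installed at /usr/local/bin/runsc, and note this overwrites any existing daemon.json):

        # Register runsc as an additional Docker runtime
        cat <<'EOF' | sudo tee /etc/docker/daemon.json
        {
          "runtimes": {
            "runsc": { "path": "/usr/local/bin/runsc" }
          }
        }
        EOF
        sudo systemctl restart docker

        # Run a container under gVisor instead of the default runc
        docker run --rm --runtime=runsc alpine uname -a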


  • My suggestion is to use system management tools like Foreman. It has a “content views” mechanism that can do more or less what you want. There are a bunch of other tools along those lines, such as Uyuni. Those tools have a lot of features, so they might be overkill for your case, but a lot of those features will probably end up useful anyway if you have that many hosts.

    With the way Debian/Ubuntu APT repos are set up, if you take a copy of /dists/$DISTRO_VERSION as downloaded from a mirror at any given moment and serve it to a particular server, that’s going to end up with apt update && apt upgrade installing those identical versions, provided that the actual package files in /pool are still available. You can set up caching proxies for that.
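
    In practice that just means the managed hosts point their sources.list at your internal copy instead of a public mirror, something like this (hostname hypothetical):

        # /etc/apt/sources.list on a managed host, pointing at the internal
        # mirror that serves the frozen dists/ snapshot for its tier
        deb http://aptmirror.internal/debian bookworm main
        deb http://aptmirror.internal/debian bookworm-updates main
        deb http://aptmirror.internal/debian-security bookworm-security main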

    I remember my DIY hodgepodge a decade ago ultimately being just a daily cronjob that pulled in the current distro (let’s say bookworm) and its associated -updates and -security repos from an upstream rsync-capable mirror; after checking a killswitch and making sure things weren’t currently on fire, it did rsync -rva tier2 tier3; rsync -rva tier1 tier2; rsync -rva upstream/bookworm tier1. Machines were configured to pull and update from tier1 (first 20%) / tier2 (second 20%) / tier3 (the rest) on a regular basis. The files in /pool were served by apt-cacher-ng, but I don’t know if that’s still the cool option nowadays (you will need some kind of local caching for those, as old files may disappear from upstream without notice).
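
    A minimal sketch of that kind of cronjob, with hypothetical paths and a hypothetical killswitch file:

        #!/bin/sh
        # Daily tier promotion: each tier lags the one above it by a day.
        # Assumes upstream/ was refreshed from an rsync mirror earlier in
        # the job; all paths here are made up, adjust to taste.
        set -e
        cd /srv/aptmirror

        # Killswitch: touch this file to freeze all tiers in place
        [ -e STOP ] && exit 0

        # Promote oldest first so each tier picks up yesterday's state
        rsync -rva tier2/ tier3/
        rsync -rva tier1/ tier2/
        rsync -rva upstream/bookworm/ tier1/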


  • For that card, you probably have to set the radeon.si_support=0 amdgpu.si_support=1 kernel options to allow amdgpu to work. I don’t have a TrueNAS system lying around, so I don’t know what the idiomatic way to change them is.

    Using amdgpu on that card has been considered experimental ever since support was added like 6 years ago, and nobody has invested any real effort into stabilizing it. It’s entirely possible that amdgpu on that card is simply never gonna work. But yeah, I think the radeon driver isn’t really fully functional anymore either, so I guess it’s worth a shot…
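
    On a typical GRUB-based Linux that would look something like the below; no idea where TrueNAS wants this set, so treat it purely as a sketch:

        # /etc/default/grub (typical distro; TrueNAS may manage the
        # bootloader itself and want this configured elsewhere)
        GRUB_CMDLINE_LINUX_DEFAULT="quiet radeon.si_support=0 amdgpu.si_support=1"

        # then regenerate the GRUB config and reboot
        sudo update-grub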


  • A company offering new-age antivirus solutions, which is to say that instead of being mostly signature-based, their product looks at application behavior. If some user opens not_a_virus_please_open.docx from their spam folder, Word might get exploited and end up running malware that tries to encrypt the entire drive. The product is supposed to sniff out that 1. Word normally opens and saves like one document at a time and 2. some unknown program is suddenly being overly active, then stop it and ring some very loud alarm bells at the IT department.

    Basically they doubled down on the heuristics-based detection and by that, they claim to be able to recognize and stop all kinds of new malware that they haven’t seen yet. My experience is that they’re always the outlier on the top-end of false positives in business AV tests (eg AV-Comparatives Q2 2024) and their advantage has mostly disappeared since every AV has implemented that kind of behavior-based detection nowadays.