• 0 Posts
  • 197 Comments
Joined 1 year ago
Cake day: June 17th, 2023



  • Looks to be an exploit that is only possible because compression changes the length of the response, and attacker-supplied data injected into the request is reflected in the response. So an attacker can guess the secret byte by byte by observing a shorter response from the server.
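    For anyone curious what guessing byte by byte looks like, here is a minimal sketch of the length-oracle idea - my own illustration, not the actual exploit. It assumes the flate2 crate, and a real attack needs far more care with padding and repeated requests:

```rust
// Cargo.toml: flate2 = "1"
use std::io::Write;

use flate2::{write::ZlibEncoder, Compression};

// Compress a payload and return the compressed size in bytes.
fn compressed_len(payload: &[u8]) -> usize {
    let mut enc = ZlibEncoder::new(Vec::new(), Compression::default());
    enc.write_all(payload).expect("compression failed");
    enc.finish().expect("compression failed").len()
}

fn main() {
    // Secret the server includes in every response (e.g. a session cookie).
    let secret = b"token=SECRET".as_slice();

    // Attacker-controlled text that is reflected next to the secret.
    for guess in ["token=A", "token=R", "token=S"] {
        let response = [secret, guess.as_bytes()].concat();
        // A correct prefix compresses slightly better, so the response can
        // come out a byte or so shorter - that length difference is the only
        // signal the attacker needs. Extend the guess whenever the shorter
        // length shows up.
        println!("{guess}: {} compressed bytes", compressed_len(&response));
    }
}
```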

    That seems like something that is not feasible against a storage device or anything else encrypted at rest, as it requires a server actively encrypting data the attacker has given it.

    We should be careful about seeing a problem in one very specific place and then trying to apply the same logic broadly to everything.






  • bcachefs is meant to be more reliable than btrfs - which has had issues since it was released (especially in the early days). Though bcachefs has yet to prove at scale that it can beat btrfs on that front.

    Bcachefs also supports more features, I believe - like encryption. No need for an extra layer below the filesystem to get the benefits of encryption. Much like compression, which both btrfs and bcachefs also handle natively.

    Btrfs also has issues with certain RAID configurations. I don’t think it yet has solid support for RAID 5/6-style setups, and that has been promised for - um, well, maybe a decade already? - and I still have not heard any signs of progress on that front. bcachefs also still has this on its wishlist, but I see more hope of it getting there before btrfs, which seems to have given up on that feature.

    Bcachefs also claims to have a cleaner codebase than btrfs.

    Though bcachefs is still very new, so we will see how true some of its claims end up being. But if they hold, it does seem like the more interesting filesystem overall.




  • For me, I like the idea of a tiling window manager with batteries included. I have been using tiling window managers for ages now and cannot go back to floating window management. But all the existing tiling window managers are bare bones and make you configure everything you want from the ground up, which I am not a huge fan of these days. I want something that works out of the box with first-party full tiling support (not just dragging windows to the side) but without needing hundreds of lines of config to get a half-decent setup.


  • nous@programming.dev to Linux@lemmy.ml - I'm excited for Cosmic Desktop

    There are basically two different versions of Cosmic. The current one is essentially just an extension for Gnome - this is what has shipped with PopOS so far and is still what ships today.

    But System76 had a vision for what they wanted, and they did not feel building that as an extension was sustainable long term. They had a bunch of stability issues (i.e. Gnome breaking things they relied on in newer versions). So they decided to write a new desktop environment from scratch in Rust that they have full control over.

    I believe that the new Cosmic sits somewhere in between KDE and Gnome in terms of customization - or at least that is what they are aiming for. Nowhere near the level of settings KDE has, but not trying to remove every option like Gnome does.

    And being a new project written from scratch, it is forward focused - it only supports Wayland.

    You can read more about their decisions in a recent blog post: https://blog.system76.com/post/cosmic-team-interview-byoux



  • nous@programming.dev to Linux@lemmy.ml - In praise of Linux.

    uptime of 840 days

    This always makes me wince. I don’t think high uptimes should be celebrated. Has your kernel ever been patched, or the running services restarted? Just installing updates is not enough to secure your system; you need to be running that new code as well.

    Also, I get very nervous about touching those systems. You have no clue what state they are in. I have seen far too many high-uptime servers lose power one day and never boot again, or come back up without all their services because someone forgot to enable one.

    Nope, I would rather see them rebooted regularly at a non-critical time so we know they will come back up. Or, even better, have an HA setup.


  • Given that I update daily, I feel that the quick connection to the server to test it’s bandwidth at boot is rather insignificant.

    But it is not just a quick connection. Speed tests, in order to be accurate, need to download a reasonable amount from each server - even a couple of megabytes from each of 200 mirrors adds up to hundreds of megabytes. This is why:

    it takes quite a while to sort through 200 mirrors.

    Have there been any credible studies that have looked at the reliability of the mirrors? The reliability would give one an idea on how often they should refresh their mirrors.

    You don’t need one. If a mirror becomes unreliable then you can run reflector again to fix the issue. There is no need to constantly run it. And you don’t need to be on the absolute fastest mirror every day - you will never notice the difference between the fastest one yesterday and the fastest one today, assuming there are no major problems with it. And if there are, that is when you run reflector again.

    And reflector already comes with a weekly timer and service, which is plenty often enough.


  • Yeah, std can never break backwards compatibility, so anything big that gets added needs to be something that won’t ever change. Something like tokio is far too large for that and already does not fit all use cases.

    What I want to see is more support for interop between the different runtimes by providing standard interfaces for the things they have in common. For instance, being able to spawn a task for the runtime to take care of. You cannot do that at the moment without knowing which runtime you are using, which is highly annoying when developing libraries that need to do it. And that is only one of many problems that could be solved in the std lib without needing to bring in a whole runtime - just create common interfaces that each runtime can implement. A rough sketch of what I mean is below.
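    Something along these lines - purely hypothetical, not an existing std or tokio API, just the shape of interface I would like to see standardized:

```rust
use std::future::Future;
use std::pin::Pin;

// A boxed future, since a trait object cannot take a generic future per call.
pub type BoxFuture = Pin<Box<dyn Future<Output = ()> + Send + 'static>>;

/// Hypothetical common interface that each runtime (tokio, smol,
/// async-std, ...) could implement for its own executor handle.
pub trait Spawner {
    fn spawn_boxed(&self, fut: BoxFuture);
}

/// A library can now kick off background work without ever naming a runtime.
pub fn start_background_job(spawner: &dyn Spawner) {
    spawner.spawn_boxed(Box::pin(async {
        // ... periodic cleanup, keep-alives, etc. ...
    }));
}
```

    With something like that in std, a library only needs a Spawner handed in by the application, and the choice of runtime stays entirely with the application.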



  • This does not work for everyone. A lot of people will try to switch, find one tool they are used to that they can no longer use, and, not being used to the alternatives, feel frustrated trying to use them for real work. Then they get pissed off at Linux and switch back to Windows.

    This advice is more for people that are thinking about Linux but have some professional, semi-professional or hobby workflow on their computers that they need to be productive in. It can be very hard for them to switch OS and the tooling they are used to at the same time, with no way to fall back to what they know when they need to.

    You will find most people don’t rely on these tools, and they can do a quick check and decide to switch straight away. But for the rest, following this advice can make transitioning to Linux easier.

    We need to stop pretending that switching tools that you rely on and have spent decades learning to be proficient in is a trivial task for everyone.


  • TL;DR: their flagship goals are:

    2024 edition: (1) supporting -> impl Trait and async fn in traits by aligning capture behavior (a small sketch of these is below the list); (2) permitting (async) generators to be added in the future by reserving the gen keyword; and (3) altering fallback for the ! type.

    Async: support for async closures and Send bounds.

    Rust in the Linux kernel: focus on the unstable features it uses so it can progress out of the experimental phase.

    And highlights other goals:

    • Stabilize cargo-script
    • Improving Rust’s borrow checker to support conditional returns and other patterns
    • Move parallel front end closer to stability
    • Ergonomic ref counting
    • Implementing “merged doctests”

    With a link to a list of 23 other goals
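    For the first flagship item, a tiny illustration of the features involved - just a sketch of my own; async fn and -> impl Trait in trait definitions are already usable as of Rust 1.75, and the 2024 edition work is about making their lifetime-capture rules consistent:

```rust
use std::collections::HashMap;

trait Storage {
    // `async fn` in a trait: desugars to a method returning
    // `impl Future<Output = Option<String>>` that captures the input
    // lifetimes - that capture behaviour is what the 2024 edition aligns.
    async fn get(&self, key: &str) -> Option<String>;

    // Return-position `impl Trait` in a trait definition.
    fn description(&self) -> impl std::fmt::Display;
}

struct MemStore(HashMap<String, String>);

impl Storage for MemStore {
    async fn get(&self, key: &str) -> Option<String> {
        self.0.get(key).cloned()
    }

    fn description(&self) -> impl std::fmt::Display {
        format!("in-memory store with {} entries", self.0.len())
    }
}
```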


  • This is an absolutely terrible post :/ I cannot believe he thinks it is a good argument at all. It basically boils down to:

    Here is a new feature modern languages are starting to adopt.

    You might think that is a good thing. Lists various reasonable reasons it might be a good thing.

    The question is: Whose job is it to manage that risk? Is it the language’s job? Or is it the programmer’s job?

    And then it moves on to the next thing in the same pattern. He lists loads of reasonable reasons you might want the feature, gives no reasons you would not want it, yet phrases everything in a way that leads you into thinking you are wrong to want these new features - while his only real arguments are reasons you do want them…

    It makes no sense.