• 0 Posts
  • 85 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • It’s an interesting point, but I think it conflates two different but related concepts. From the perspective of the library author, a vulnerability is a vulnerability and needs to be fixed. From the perspective of the library consumer, a vulnerability may or may not be an issue depending on a lot of factors. In some ways severity lives in the wrong place, as it’s really the consumer that needs to decide the severity, not the library author.

    A CVE without a severity score is fine, I think. Including the list of CWEs that a particular CVE is composed of is useful as well. But a CVE should not include a severity score, because there really isn’t a single severity but a range of severities depending on specific usage. At best the severity score of a CVE represents a worst-case scenario, not even an average case, never mind the case for a specific project.


  • Yeah, our security team once flagged our app for having a SQL injection vulnerability in one of our dependencies. We told them we weren’t going to do anything about it. They got really mad and set up a meeting with one of the executives, apparently planning to publicly chew us out.

    We get there, and they give their explanation about the major security vulnerability that we’re ignoring, etc. After they said their piece, we asked them how they had come to the conclusion that we had a SQL injection. The explanation was about what you’d expect: they had scanned our dependencies and one of the libraries had a security advisory. We then explained that there were two problems with their findings. First, we don’t use SQL anywhere in our app, so there’s no conceivable way we could have a SQL injection vulnerability. Second, our app didn’t have a database or data storage of any kind; we only made RESTful web requests, so even if there were some kind of injection vulnerability (which there wasn’t), it would still be sanitized by the services we were calling. That was the last time they even bothered arguing with us when we told them we were ignoring one of their findings.




  • Hmm, it’s true that cold fusion would need some kind of physics breakthrough, although I think it might be going too far to call it junk science. To be entirely fair, energy-positive hot fusion also requires some kind of physics breakthrough, although potentially a far less extreme one.

    The Sun works because of its mass, which generates the temperatures and pressures necessary to trigger fusion. Replicating those pressures and temperatures here, though, is incredibly energy intensive. On paper the energy released by the fusion reaction should exceed those energy requirements, but when you factor in that doing so requires exceedingly rare and expensive-to-create fuel, most if not all of that energy surplus vanishes. Nobody has been able to prove that they can get more energy out of the reaction than the energy cost of creating the fuel and triggering the reaction, so until that happens hot fusion is far from proven either. There are a few research projects that look promising, but it’s far from guaranteed that they’ll pan out.


  • Hydro is good when it’s available, but it also has some significant problems. The biggest is that it’s an ecological disaster, even if the reach of that disaster is far more limited. The areas upstream of the dam flood, while the ones downstream are in constant danger of flooding and drought. In the worst case, if the dam collapses it can wipe entire towns off the map with little or no warning. It is objectively far more dangerous and damaging to the environment than any nuclear reactor. The only upside it has is that it’s effectively infinitely renewable, barring massive shifts in weather patterns or geology.

    All of that is of course assuming that hydro is even an option. There’s a very specific set of geological and weather features that must be present, so the locations you can power with hydro power without significant transport problems are limited.

    It’s certainly an option, and better than coal, oil, or gas, but still generally worse than nuclear.


  • Why? It’s an active area of research, with several companies and universities trying to solve the problem. There’s also a chance hot fusion succeeds, although to my knowledge nobody has actually gotten close to solving that particular problem either. Tokamaks and such are still energy negative when taken as a whole (a couple have claimed energy-positive status, but only by excluding the power requirements of certain parts of their operation). I guess maybe I should have just said fusion instead of cold fusion, but either way there are no working energy-positive fusion systems currently.

    Edit: To be clear, I’m not claiming that anyone has a working cold fusion device, quite the opposite. Nobody has been able to demonstrate a working cold fusion device to date. Anybody claiming they have is either lying or mistaken. But by the same token, nobody has been able to show an energy-positive hot fusion device either. There’s a couple that have come close, but only by doing things like hand-waving away the cost to produce the fuel, or part of the energy cost of operating the containment vessel, to say nothing of the significant long-term maintenance costs. I’ve not seen evidence of anybody getting even remotely close to a financially viable fusion reactor of any kind.


  • The real problem is that there are no renewable solutions for base load; nuclear is the best we’ve got. Renewables are good, but they’re intermittent: you can’t produce renewable power on demand or scale it on demand, and storing it is also a problem. Because of that you still need something to fill in the gaps for renewables. Your options there are coal, oil, gas, or nuclear. That’s it, those are your options. Pick one.

    If we can successfully get cold fusion working we’ll finally have a base power generation option that doesn’t have (many) downsides, but until then nuclear power is the least bad option.

    So yes, if you tell them “no nuclear”, you’re going to get more coal and gas plants, coal because it’s cheap, and gas because it’s marginally cleaner than coal.





  • It also massively helps with productivity

    Absolutely! Types are as much about providing the programmer with information as they are about informing the compiler. A well-typed and well-designed API conveys so much useful information. It’s why it’s mildly infuriating when I see functions that look like something from C, where you’ll see something like:

    pub fn draw_circle(x: i8, y: i8, red: u8, green: u8, blue: u8, r: u8) -> bool {
    

    rather than a better strongly typed version like:

    type Point = Vec2<i8>;
    type Color = Vec3<u8>;
    type Radius = NonZero<u8>;
    pub fn draw_circle(point: Point, color: Color, r: Radius) -> Result<()> {
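
    Filling in the helper types so that second signature actually compiles (these Vec2/Vec3 definitions are my own minimal stand-ins, not from any particular crate; real code might pull in something like glam):

```rust
use std::num::NonZeroU8;

// Minimal stand-ins for the vector types in the signature above.
pub struct Vec2<T> { pub x: T, pub y: T }
pub struct Vec3<T> { pub x: T, pub y: T, pub z: T }

pub type Point = Vec2<i8>;
pub type Color = Vec3<u8>;
pub type Radius = NonZeroU8;

// The types alone now document what each argument means, and a zero
// radius is unrepresentable rather than a runtime error.
pub fn draw_circle(point: Point, color: Color, r: Radius) -> Result<(), String> {
    // Actual drawing elided; the sketch just touches every field.
    let _ = (point.x, point.y, color.x, color.y, color.z, r.get());
    Ok(())
}

fn main() {
    let red = Vec3 { x: 255, y: 0, z: 0 };
    let origin = Vec2 { x: 0, y: 0 };
    assert!(draw_circle(origin, red, NonZeroU8::new(5).unwrap()).is_ok());
}
```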
    

    Similarly I think the ability to use an any or dynamic escape hatch is quite useful, even if it should be used very sparingly.

    I disagree with this; I don’t think those are ever necessary, assuming a powerful enough type system. Function arguments should always have a defined type, even if it’s one using dynamic dispatch. If you just want to avoid spelling out the type of a local, let bindings where you don’t explicitly write the type are fine, but even then the binding still has a type; you’re just letting the compiler infer it for you (and if it can’t, it will error).
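
    To make that concrete, here’s a small sketch (my own example): even with dynamic dispatch the argument has a perfectly well-defined type, a trait object, and an unannotated let binding still gets one concrete inferred type:

```rust
use std::fmt::Display;

// Dynamic dispatch, but the parameter still has a defined type: `&dyn Display`.
fn describe(value: &dyn Display) -> String {
    format!("value = {value}")
}

fn main() {
    // No annotation, but `n` is still statically typed; the compiler
    // infers `i32`, and if it couldn't infer anything it would error.
    let n = 42;
    println!("{}", describe(&n));
    println!("{}", describe(&"hello"));
}
```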


  • Hmm, sort of, although that situation is a little different and nowhere near as bad. Rust’s type system and feature flags mean that most libraries actually supported both tokio and async-std; you just needed to compile them with the appropriate feature flag. Even more worked with both libraries out of the box, because they only needed the minimal functionality that Future provided. The only reason it was even an issue is that Future didn’t provide a few mechanisms that might be necessary depending on what you’re doing. E.g., there’s no mechanism to fork/join in Future; that has to be provided by the implementation.

    async-std still technically exists; it’s just that most of the popular libraries and frameworks happened to pick tokio as their default (or only) async implementation, so if you’re going by the most downloaded async libraries, tokio ends up overrepresented. Longer term I expect that chunks of tokio will get pulled in and made part of the std library, like Future was, to the point where you’ll be able to swap tokio for async-std without needing a feature flag, but that’s likely going to need some more design work to do cleanly.
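
    As a sketch of what that feature-flag support typically looks like in a library’s manifest (the feature names here are illustrative, not from any specific crate):

```toml
# Hypothetical library Cargo.toml: the consumer picks the runtime via a feature.
[features]
default = ["runtime-tokio"]
runtime-tokio = ["dep:tokio"]
runtime-async-std = ["dep:async-std"]

[dependencies]
tokio = { version = "1", optional = true }
async-std = { version = "1", optional = true }
```

    A consumer wanting async-std would then depend on the library with default-features = false and the runtime-async-std feature enabled.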

    In the case of D, it was literally the case that if you used one of the standard libraries, you couldn’t import the other one or your build would fail, and it didn’t have the feature-flag capabilities Rust has to let authors paper over the difference. It really did cause a hard split in D’s library ecosystem, and the only fix was getting the two teams responsible for the standard libraries to sit down and agree to merge them.


  • I’ll look into OPAM, it sounds interesting.

    I disagree that combining build and package management is a mistake, although I also agree that it would be ideal for a build/package management system to be able to manage other dependencies.

    A big chunk of the problem is how libraries are handled, particularly shared libraries. Nix sidesteps the problem by using a complex system of symlinks to avoid DLL hell, but I’m sure a big part of why the Windows work is still ongoing is because Windows doesn’t resemble a Linux/Unix system in the way that OS X and (obviously) Linux do. Its approach to library management is entirely different because once again there was no standard for how to handle that in C/C++ and so each OS came up with their own solution.

    On Unix (and by extension Linux, and then later OS X), it was via special system include and lib folders in canonical locations. On Windows it was via dumping everything into C:\Windows (and what a lovely mess that has made [made somehow even worse by mingw/Cygwin layering in Linux-style conventions that are only followed by mingw/Cygwin-built binaries]). Into this mix you have the various compilers and linkers that all either expect the given OS’s conventions to be followed, or else define their own OS-independent conventions. The problem is that now we have a second layer of divergence, with languages that follow different conventions struggling to work together. This isn’t even a purely Rust problem; other languages also struggle with it. Generally, most languages that interop with C/C++ in any fashion do so by just expecting C/C++ libraries to be installed in the canonical locations for that OS, as that’s the closest thing to an agreed-upon convention in the C/C++ world, and this is in fact what Rust does as well.
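
    As a concrete illustration of Rust leaning on those canonical locations, here’s roughly what a Cargo build script (build.rs) does to link a system C library; the choice of zlib and the extra search path are just assumptions for the example:

```rust
// Sketch of a Cargo build script: the directives assume the linker will
// find the library in the platform's canonical locations (/usr/lib etc.).
fn link_directives() -> Vec<String> {
    vec![
        // Dynamically link against libz from the default system search path.
        "cargo:rustc-link-lib=dylib=z".to_string(),
        // Optionally add a native search path for non-standard installs.
        "cargo:rustc-link-search=native=/usr/local/lib".to_string(),
    ]
}

fn main() {
    // Cargo reads these directives from the build script's stdout.
    for d in link_directives() {
        println!("{d}");
    }
}
```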

    In an ideal world, there would be an actual agreed upon C/C++ repository that all the C/C++ devs used and uploaded their various libraries to, with an API that build tools could use to download those libraries like Rust does with crates.io. If that was the case it would be fairly trivial to add support to cargo or any other build tool to fetch C/C++ dependencies and link them into projects. Because that doesn’t exist, instead there are various ad-hoc repositories where mostly users and occasionally project members upload their libraries, but it’s a crap-shoot as to whether any given library will exist on any given repository. Even Nix only has a tiny subset of all the C/C++ libraries on it.


  • So, it’s C#?

    No, that’s what Java would look like today if designed by a giant evil megacorp… or was that J++. Eh, same difference. /s

    This did make me laugh though. Anyone else remember that brief period in the mid-90s when MS released Visual J++ aka Alpha C#? Of course then Sun sued them into the ground and they ended up abandoning that for a little while until they were ready to release the rebranded version in 2000.



  • Rust’s ownership model is not just an alternative to garbage collection; it provides much more than that. It’s as much about preventing race conditions as it is about making sure that memory (and other resources) get freed up in a timely fashion. Just because Go has GC doesn’t mean it provides the same safety guarantees as Rust does. Go’s type system is also weaker than Rust’s, even setting aside the matter of memory management.
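
    A small sketch of what that buys you in practice (my own example): the compiler forces shared mutable state across threads through a synchronized wrapper like Arc<Mutex<...>>; handing a plain mutable reference to two threads simply doesn’t compile, GC or no GC:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Sum the values on worker threads. The ownership rules require the shared
// accumulator to be wrapped in Arc<Mutex<..>>; without it, this won't compile.
fn parallel_sum(values: Vec<i32>) -> i32 {
    let total = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();
    for v in values {
        let total = Arc::clone(&total);
        handles.push(thread::spawn(move || {
            *total.lock().unwrap() += v;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let result = *total.lock().unwrap();
    result
}

fn main() {
    println!("{}", parallel_sum(vec![1, 2, 3, 4]));
}
```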


  • Eww… you’re probably right. TIHI.

    On a related note, I’ve always preferred t-shirt sizing over story points. You can still screw that up by creating a conversion chart to translate t-shirt sizes into hours (or worse, man-hours) or story points, but at least it’s slightly more effort to get wrong than the tantalizingly linear, numeric-looking story points.

    If I was truly evil I’d come up with a productivity unit that used nothing but irrational constants.

    “Hey Bob, how much work do you think that feature is?”

    “Don’t know man, I think maybe e, but there’s a lot there so it might end up being π.”


  • It sort of has nil. While a value evaluates to either null or undefined, nil is used in many JS libraries and frameworks to mean something that is either null or undefined. So you’ll see functions like function isNil(value) { return value == null || value == undefined } and they’ll sometimes confuse things even more by actually defining a nil value that’s just an alias for null, which is pointlessly confusing.

    As an aside, basically every language under the sun has NaN, as it’s part of the IEEE floating point standard. JavaScript just confuses the situation more than most because it’s weakly typed and doesn’t differentiate between integers, floats, or some other type like an array, string, or object. Hence anything in JS can be NaN, even though it really only has meaning for a floating point value.
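
    For instance, the same IEEE 754 NaN exists in a strongly typed language like Rust, but it stays confined to the float types:

```rust
fn main() {
    let nan = f64::NAN; // from IEEE 754, not a JavaScript invention
    assert!(nan != nan); // NaN is the only value not equal to itself
    assert!((0.0_f64 / 0.0).is_nan()); // 0/0 on floats produces NaN
    // Integers can't be NaN at all; the concept only exists for floats.
}
```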