  • I actually decided to use AVIF on my project. But as far as I know, neither it nor WebP is supported by any default image viewer on Windows. Which is rather annoying, but I moved on to better programs for that anyway.

    AVIF is second to JXL though. Some of the downsides of being derived from a video format are that you lose proper progressive loading (only top-to-bottom, IIRC), it degrades on re-encodes, and some other things I can’t think of right now. AVIF gets a win because if you have an AV1 decoder you already have an AVIF decoder too! But since it is essentially a video frame, there are downsides: some image-specific features can’t or won’t be added.


  • I kinda get where he is coming from though. AI is being crammed into everything, especially into things it is not currently suited for.

    After learning about machine learning, you kind of realize that unlike “regular programs”, ML gives you “roughly what you want” answers. Approximations, really. This is all fine and good for generating images, for example, because minor details being off from what you wanted probably isn’t too bad. A chatbot itself isn’t wrong here either, because there are many ways to say the same thing. The important thing is that there is a definite step after that where you evaluate the result. In simpler ML you can even figure out the specifics of the process, but for the most part we evaluate what the LLM said, or whether the image matches our expectations. But we can’t control or constrain the output to exactly our needs, because our restrictions are largely just input to an almost-finished approximation engine.

    The problem is that companies take these approximation engines, put them in their products, and treat their output as fact. Like AI chatbots doing customer support and making things up, like the airline customer who was told about rules that didn’t exist, or the search engines that parrot jokes or harmful advice. Sure, you and I might realize that these answers come from a machine that doesn’t actually think about them, but others don’t. And slapping a “*this might be wrong because it’s AI” on it is not an acceptable waiver of accountability.

    Despite this, I use ChatGPT and Gemini a lot to help me program; they get a lot of things wrong but also do great. It’s a great tool, exactly because I step in after the approximation step, review, and decide. I’m aware of the limits. But putting these things in front of “users” without a review step means you are advertising that you are either unaware of this flaw, or that you did the cost-benefit analysis and decided that, if nothing else, it’ll generate interest during the hype.

    There is huge potential, but throwing AI into situations where facts are needed, when it’s only making rough guesses, is the wrong way to go about it.




  • I’m in the MPC-HC gang on Windows. Just so much more practical than other players. The main selling point was that in full screen the controls go away once you move the cursor off them; it was amazing. And no waiting for subs to be processed like VLC had to back then. I never turned back, so I don’t know if that is still a thing.



  • Just learning Rust for fun, but I decided I wanted to make a simple website. I don’t like web stuff that much, but I’d seen htmx, so I gave that a shot. Found the popular actix for the server side, and set out to make a simple blog.

    Making a page is simple, and using htmx is also simple. Setting out to create a blog that is all in a single evolving page? Not so much. Either you don’t get the essential back and forward navigation, or you add it, but then a page refresh will call just the partial endpoint and screw things up. There are some quite nice workarounds (a sketch of one is below), but the end result is that sometimes going back will leave me on a blank page at one step.
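    Something along these lines is the kind of workaround I mean. It’s only a sketch with made-up names (render_post, wrap_in_layout), assuming actix-web 4 and htmx’s standard HX-Request header: the same URL returns just the fragment for htmx swaps, and the full document on a refresh or direct visit.

    use actix_web::{get, web, App, HttpRequest, HttpResponse, HttpServer, Responder};

    // Hypothetical renderers; the real project would build these from Markdown files.
    fn render_post(slug: &str) -> String {
        format!("<article><h1>{slug}</h1><p>post body…</p></article>")
    }

    fn wrap_in_layout(inner: &str) -> String {
        format!("<!DOCTYPE html><html><body id=\"content\">{inner}</body></html>")
    }

    #[get("/posts/{slug}")]
    async fn post_page(req: HttpRequest, slug: web::Path<String>) -> impl Responder {
        let partial = render_post(&slug);
        // htmx requests carry an HX-Request header; a refresh or direct visit does not,
        // so one URL can serve both the swap fragment and the full document.
        if req.headers().contains_key("HX-Request") {
            HttpResponse::Ok().content_type("text/html; charset=utf-8").body(partial)
        } else {
            HttpResponse::Ok().content_type("text/html; charset=utf-8").body(wrap_in_layout(&partial))
        }
    }

    #[actix_web::main]
    async fn main() -> std::io::Result<()> {
        HttpServer::new(|| App::new().service(post_page))
            .bind(("127.0.0.1", 8080))?
            .run()
            .await
    }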

    I’m probably going to settle for each blog entry being a separate page if I make the site public. Or just let the small flaws be there, because I hate how slow sites are these days, so loading literally only the text/HTML that’s supposed to change is very cool.

    Next steps are removing any chance of path traversal (reading literally any file on disk by modifying URLs…), picking some Markdown-to-HTML crate, and seeing how image loading works. If I ever get around to any of it.
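    For the path traversal part, a minimal sketch of what I have in mind (std only; the helper name and directory layout are made up): only accept plain slugs, and double-check that the resolved file still lives under the posts directory.

    use std::path::{Path, PathBuf};

    // Hypothetical helper: map a user-supplied slug to a file inside the posts
    // directory, rejecting anything that could escape it.
    fn resolve_post(posts_dir: &Path, slug: &str) -> Option<PathBuf> {
        // Only allow simple slugs: no separators, no "..", no hidden files.
        let ok = !slug.is_empty()
            && slug.chars().all(|c| c.is_ascii_alphanumeric() || c == '-' || c == '_');
        if !ok {
            return None;
        }
        let candidate = posts_dir.join(format!("{slug}.md"));
        // Belt and suspenders: canonicalize and confirm we are still under posts_dir.
        let canonical = candidate.canonicalize().ok()?;
        let root = posts_dir.canonicalize().ok()?;
        canonical.starts_with(&root).then_some(canonical)
    }

    Canonicalize also fails if the file doesn’t exist, which conveniently doubles as a 404 check.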


  • I made do with my IDE, even after getting a developer job. Outside of shenanigans involving a committed password, and the occasional empty commit to trigger a build job on GitHub without requiring a new review to be approved, I still don’t use the command line a lot.

    But it’s true: if you managed to commit and push, you are OK. The IDE will even make fixing most merges simple.



  • And that is the part that irks me a little. How am I supposed to expect anything when I don’t expect the undefined behavior to begin with?

    Say I accidentally do something undefined: I do some math incorrectly on an index and try to read out of bounds from an array that doesn’t have bounds checks. It’s already my fault for several reasons, but in a complex project, what the array is and its details may be “vague”, especially if it isn’t something you wrote yourself. So as a scenario it’s not completely out there (some other dev knew the index was “always” right, prematurely optimized, and used unguarded arrays instead). Still completely avoidable, but it can happen.

    But if only an edge case actually leads to an out-of-bounds read, the problem will probably never surface where the issue actually is. An experienced dev might never step into this mess, but sometimes it happens when people change up what others did. I’ve had similar problems at my workplace, just without undefined behavior as the result. At the end of the day, you’re sitting there with hard-to-spot issues that have hard-to-know consequences.

    This doesn’t require a special programming language to solve; first and foremost it just requires a bounds-checked array, and tests and good reviews might also catch the bug before production, which in C++ we were taught about from the beginning. If performance actually was a problem, then I guess we’d maybe still end up with this bug in the example. But my point is something along these lines: all the good practice comes down to the choices of the individual developer. And the choices of one also affect the next one.

    If we could instead place those choices a step up in the chain, and have the language enforce safety unless you specifically say you need to be unsafe. Then in my example, either the code would already be safe, or someone threw in an unsafe block to do their “fast” read of the array. In one case I get a crash due to a bad read; in the other I have to really evaluate why this is unsafe to begin with and apply extra caution. That extra caution goes away when everything is unsafe. Kinda like how a small PR will get 15 comments while a huge PR gets fewer. You won’t dedicate your all to every line of code, but if a line is tagged “unsafe” you sure as hell will.
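    Roughly what I mean, as a small Rust-flavored sketch (the values are made up): the ordinary read is checked and can only fail loudly, while the unchecked read has to be spelled out, which is exactly the marker that draws the extra scrutiny.

    fn main() {
        let data = [10, 20, 30];
        let i = 7; // an index that went wrong somewhere upstream

        // Safe indexing: data[i] with a bad index is a guaranteed panic,
        // never a silent out-of-bounds read. Checked access makes the
        // failure case explicit at the call site:
        match data.get(i) {
            Some(v) => println!("value: {v}"),
            None => println!("index {i} is out of bounds"),
        }

        // The “fast” unchecked read still exists, but it has to be tagged unsafe.
        if i < data.len() {
            let v = unsafe { *data.get_unchecked(i) };
            println!("value: {v}");
        }
    }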

    Nothing is a magic solution to everything, and unsafe array reads are just a simple example of fucky behavior. I’m not sure if it still stands, but even something like this was/is undefined in C++:

    a[i] = i++;

    That is, to me, something you can quickly forget about. And it’s undefined because the read of i for the index and the write from i++ aren’t sequenced relative to each other, so the compiler is free to reorder them, even if that breaks the code; normally the index and the variable being incremented would be different variables. Again, here is something that should have obviously stopped me. If the compiler still follows through (I’m sure there are warnings for it), then it’s just letting me make an error no one should be allowed to. There is no reason this should ever compile.

    If we can do that for everything at the language level, it’s a win in my book.


  • I’m just a noob when it comes to low-level languages, having only worked in C# and Python. But I took a course on C++ and encountered something that didn’t seem right. I asked and got “that’s undefined behavior”. And that didn’t quite sit right with me. We don’t know what will happen? It’ll probably crash? Or worse? How can one not know how a programming language will behave? It felt wrong to me.

    Now, it’s been quite some time since that happened, and I understand why it’s undefined. But I still do not think it should be allowed by default. C and C++ are both “free to do as you want” languages, but I don’t think a language should let you do something undefined, especially if you aren’t aware you’re doing it. Everyone makes mistakes, even stupid ones. If we can make a place where undefined behavior simply won’t happen, why not go there? If you need some special tricks, you can always drop the guard where you need it. I guess I’m just reiterating the article here, though. But that’s the point for me: if something can enforce “defined behavior” by default, then I’d want that.



  • I think it’s a great baseline. In an academic context, Python (and perhaps MATLAB) is extremely common for data analysis. I doubt many would move code to other languages unless strictly needed, as was the case in the article. Showing how to “simply” speed up code like the article does is a great way to gain speed even if you don’t analyze the timing yourself and just replicate the steps from the article.

    Having done some of this myself as part of research, and knowing people who went from developer jobs to research jobs, I can safely say scientists generally do not write good code, regardless of language. An article like this gives good steps to follow from start to end, and would be a valuable tool in a possible transition to better code.