Three raccoons in a trench coat. I talk politics and furries.

Other socials: https://ragdollx.carrd.co/

  • 14 Posts
  • 62 Comments
Joined 1 year ago
Cake day: June 20th, 2023


  • Ragdoll X@lemmy.world to Memes@lemmy.ml · elon is a lame poser · edited 1 month ago

    One of the community notes on the post said it was posted the day before by another account, and an AI image detector flagged it as AI with 90% confidence.

    Musk using an AI image to fantasize about being some badass cowboy is both pathetic and absolutely expected lol



  • You can and should read the full report here. Several experts and researchers who work in a variety of fields relevant to gender and youth healthcare reviewed hundreds of studies, commissioned several literature reviews, and conducted thousands of interviews with gender diverse children, their families and healthcare providers to reach their conclusions.

    To dismiss them as “conservatives who just hate other humans”, and to handwave away a 388-page report that took more than three years to complete by vaguely gesturing at some nondescript “statistics”, would be extremely shortsighted.



    > Please tell me how an AI model can distinguish between “inspiration” and plagiarism then.

    > […] they just spit out something that it “thinks” is the best match for the prompt based on its training data and thus could not make this distinction in order to actively avoid plagiarism.

    I’m not entirely sure what the argument is here. Artists don’t scour the internet for any image that looks like their own drawings to avoid plagiarism, and often use photos or the artwork of others as reference, but that doesn’t mean they’re plagiarizing.

    Plagiarism is about passing off someone else’s work as your own, and image-generation models are trained with the intent to generalize - that is, to generate things they’ve never seen before, not just copy. That’s why we’re able to create an image of an astronaut riding a horse, even though that’s something the model obviously has never seen, and why we can teach models new concepts with methods like textual inversion or Dreambooth.


    I know the second definition was proposed by OpenAI, who obviously has a vested interest in this topic, but that doesn’t mean it can’t be a useful or informative conceptualization of AGI. After all, we have to set some threshold for how much intelligence an AI needs to display, and in what areas, for it to be considered an AGI. Their proposal - an autonomous system that surpasses humans in economically valuable tasks - is fairly reasonable, though it’s still pretty vague and very much debatable, which is why it isn’t the only definition that’s been proposed.

    Your definition is definitely more peculiar - I’ve never seen anyone else propose something like it - and it also seems to exclude humans, since you’re referring to problems we can’t solve.

    The next question, then, is which problems specifically an AI would need to solve to fit your definition, and with what accuracy. Do you mean any problem we can throw at it? At that point we’d be going past AGI and into artificial superintelligence…

    > Not only has it not been proven whether LLMs will lead to AGI, it hasn’t even been proven that AGIs are possible.

    By your definition AGI doesn’t really seem possible at all. But of course, your definition isn’t how most data scientists or people in general conceptualize AGI, which is the point of my comment. It’s very difficult to put a clear-cut line on what AGI is or isn’t, which is why there are those like you who believe it will never be possible, but there are also those who argue it’s already here.

    > No it can’t. If the task requires the LLM to solve a problem that hasn’t been solved before, it will fail.
    >
    > Ask an LLM to solve a problem without a known solution and it will fail.

    That’s simply not true. That’s the whole point of the concept of generalization in AI and what the few-shot and zero-shot metrics represent - LLMs solving problems represented in text with few or no prior examples by reasoning beyond what they saw in the training data. You can actually test this yourself by simply signing up to use ChatGPT since it’s free.
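
    To make the zero-shot vs. few-shot distinction concrete: it just describes how many worked examples are included in the prompt before the new task. A rough Python sketch (the helper name and the translation task are made-up illustrations, not from any particular library):

```python
def build_prompt(task, examples=None):
    """Assemble a prompt string.

    With no examples the prompt is "zero-shot"; with k worked
    examples prepended it is "k-shot" (few-shot).
    """
    parts = []
    for question, answer in (examples or []):
        parts.append(f"Q: {question}\nA: {answer}")
    parts.append(f"Q: {task}\nA:")
    return "\n\n".join(parts)

# Zero-shot: the model sees only the new task.
zero_shot = build_prompt("Translate 'cat' to French.")

# Few-shot (2-shot): two worked examples come first, then the new task.
few_shot = build_prompt(
    "Translate 'cat' to French.",
    examples=[
        ("Translate 'dog' to French.", "chien"),
        ("Translate 'bird' to French.", "oiseau"),
    ],
)
```

    The few-shot and zero-shot benchmark scores reported for LLMs measure how well the model completes prompts like these for tasks it wasn’t explicitly trained on.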

    > Exams often are bad measures of intelligence. They typically measure your ability to consume, retain, and recall facts. LLMs are very good at that.

    So are humans. We’re also deterministic machines that output some action depending on the inputs we receive through our senses, much like an LLM outputs some text depending on the inputs it receives - and, as I mentioned, LLMs can reason beyond what they’ve seen in the training data.

    > The ability to interact with physical objects is very clearly not a good test for general intelligence and I never claimed otherwise.

    I wasn’t accusing you of anything, I was just pointing out that there are many things we can argue require some degree of intelligence, even physical tasks. The example in the video requires understanding the instructions, the environment, and how to move the robotic arm in order to complete new instructions.

    I find LLMs and AGI interesting subjects and was hoping to have a conversation on the nuances of these topics, but it’s pretty clear that you just want to turn this into some sort of debate to “debunk” AGI, so I’ll be taking my leave.