• @festnt@sh.itjust.works
      cake
      link
      fedilink
      English
      45
      29 days ago

      “want me to try again with even more randomized noise?” literally makes no sense if it had actually generated what you asked for (which the chatbot thinks it did)

      • @joshchandra@midwest.social
        link
        fedilink
        English
        22
        28 days ago

        Remember, “AI” (autocomplete idiocy) doesn’t know what sense is; it just continues words, producing output that may seem to address at least some of the topic, with no innate understanding of accuracy or truth.

        Never forget that GPT-2 can literally be run in a giant Excel spreadsheet, with no other program needed. It’s not “smart”; it’s ultimately millions of formulae at work.
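
        To make “millions of formulae” concrete, here is a minimal sketch (Python/NumPy is my choice, and the sizes and weights are made up for illustration) of what one simplified attention layer boils down to: matrix multiplies and a softmax, every step of which a spreadsheet can express cell by cell.

        ```python
        # Toy single-head attention: nothing but arithmetic on arrays of numbers.
        # All values below are made up for the demo; a real GPT-2 layer works the
        # same way, just with learned weights and width 768 instead of 8.
        import numpy as np

        rng = np.random.default_rng(0)
        d = 8                                   # toy embedding width
        tokens = rng.normal(size=(5, d))        # five made-up token embeddings

        # stand-ins for learned parameter matrices
        Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

        Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
        scores = Q @ K.T / np.sqrt(d)                                          # dot-product attention
        weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
        out = weights @ V                                                      # weighted mix of value rows

        print(out.shape)  # (5, 8): still nothing but formulae applied to numbers
        ```

        A real model stacks dozens of these layers (plus equally mundane feed-forward ones), but no single step is more exotic than this.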

    • Gloomy
      link
      fedilink
      English
      64
      edit-2
      28 days ago

      Wow. I ABSOLUTELY saw an image of a dog in the middle. Our brain sure is fascinating sometimes.

  • adr1an
    link
    fedilink
    English
    30
    29 days ago

    That’s human-like intelligence at its finest. I am not being sarcastic, hear me out. If you asked a person to give you 10 numbers at random, they couldn’t really do it. Everyone thinks randomness is easy, but it isn’t (see: random.org).

    So of course a GPT model would fail at this task. I love that they do fail, and the dog looks so cute!!

    • @kaidezee@lemmy.ml
      link
      fedilink
      English
      8
      edit-2
      28 days ago

      I mean, here are a few random numbers off the top of my head: 1 9 5 2 6 8 6 3 4 0. I don’t get it, why is it supposed to be hard? Sure, they’re not “truly” random, but they sure look random /:

      • @Ultraviolet@lemmy.world
        link
        fedilink
        English
        40
        edit-2
        28 days ago

        You have nearly one of each digit (7 is missing, 6 appears twice), and you’re deliberately avoiding adjacent doubles and runs of consecutive numbers. Human attempts at randomness tend to be very idealized in that way, and as a result, less random.
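
        As a rough illustration of those tells (my own sketch, not anything from the thread), compare the digits quoted above with ten genuinely uniform draws:

        ```python
        import random

        def describe(digits):
            """Count the tells: how many distinct digits appear, and adjacent doubles."""
            doubles = sum(a == b for a, b in zip(digits, digits[1:]))
            return {"distinct digits": len(set(digits)), "adjacent doubles": doubles}

        human = [1, 9, 5, 2, 6, 8, 6, 3, 4, 0]               # the sequence from the comment above
        machine = [random.randrange(10) for _ in range(10)]  # ten uniform draws

        print("human:  ", describe(human))
        print("machine:", describe(machine))
        # Ten uniform draws contain at least one adjacent double about 61% of the
        # time and usually cover only 6-7 distinct digits; human sequences tend to
        # show near-perfect coverage and no doubles at all, as above.
        ```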

      • @piccolo@sh.itjust.works
        link
        fedilink
        English
        25
        28 days ago

        They may look random, but they aren’t truly random. Computers are terrible at it too. That’s why cryptography requires external entropy sources to generate “true” random numbers. For example, Cloudflare uses a wall of lava lamps to generate randomness for encryption keys.
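
        To make the distinction concrete, here is a small sketch assuming CPython’s standard library: a seeded PRNG is completely reproducible, while the secrets module draws from the OS entropy pool, which the kernel feeds with hardware and environmental noise (the role Cloudflare’s lava-lamp wall plays for its own systems).

        ```python
        import random
        import secrets

        # Deterministic PRNG: same seed, same "random" numbers on every single run.
        prng = random.Random(42)
        print([prng.randrange(10) for _ in range(10)])

        # CSPRNG backed by the OS entropy pool: what key generation should use.
        print([secrets.randbelow(10) for _ in range(10)])
        print(secrets.token_hex(16))   # 128 bits of key material
        ```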

        • @FermionWrangler@lemm.ee
          link
          fedilink
          English
          1
          28 days ago

          792654349324138383027654826548192874651875306480462765726382

          I don’t know man, that’s pretty random. I mean, do you think you can predict the next numbers in the sequence just from the ones already there? You’d have to predict the next batch, since I made these come in batches; I can’t exactly produce one number at a time by banging on my number pad.

          • @Hawk@lemmy.dbzer0.com
            link
            fedilink
            English
            4
            28 days ago

            I can make an educated guess about which numbers are most likely, yes.

            For example, you have no repeated digits in a row, and your batch ends in 2, so I can guess that 2 is less likely to come next.

            Humans have certain tendencies that make a sequence merely look more random. Also, you’ve probably seen those mentalists correctly guessing seemingly random choices. That tells you how easily people can be nudged toward something specific, which is about how “random” we actually are.

            • 𝓔𝓶𝓶𝓲𝓮
              link
              fedilink
              English
              1
              edit-2
              28 days ago

              you can just throw a coin x times and here you go true randomness and in convenient binary too

              computers can’t fathom our coin tossing abilities

              though truth be told it’s more because we are just so bad at tossing coins. not even AI can predict what will happen when we start throwing shit around

              I bet it is even more random when you throw a coin while being inebriated.

              Actually say random numbers when you are drunk shitless and they will be random. Checkmate
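
              for what it’s worth, a tiny sketch of the coin-to-bits idea; the pairing trick (von Neumann debiasing) is my own addition, not something mentioned above, and it even copes with a lopsided coin:

              ```python
              def flips_to_bits(flips):
                  """flips: iterable of 'H'/'T'. Yields unbiased bits from unequal pairs."""
                  it = iter(flips)
                  for a, b in zip(it, it):   # consume the flips two at a time
                      if a != b:             # HT -> 1, TH -> 0; HH and TT are discarded
                          yield 1 if a == 'H' else 0

              print(list(flips_to_bits("HTTHHHTTHTTH")))   # -> [1, 0, 1, 0]
              ```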

              • @Hawk@lemmy.dbzer0.com
                link
                fedilink
                English
                4
                28 days ago

                Clearly you don’t understand what the discussion is about, or you wouldn’t give such a hilariously bad example.

                Yes, practically speaking, predicting a coin toss would be very hard. But if you take everything into account (gravity, wind direction, the coin’s center of mass, etc.) you can calculate the result, making it not truly random.

          • @allisonmaybe@lemm.ee
            link
            fedilink
            English
            1
            28 days ago

            Absolutely. And if you typed enough there would be enough information to tell if you typed that on a keyboard or phone, which fingers you used, and how you were feeling that day.

          • @Knock_Knock_Lemmy_In@lemmy.world
            link
            fedilink
            English
            1
            27 days ago

            I am 99.8% sure that your sequence of numbers is not random. Your brain purposefully avoided repeating a digit: in a truly random sequence of 60 digits, the probability of never repeating the previous digit is only (9/10)^59, about 0.2%.
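
            A quick sketch of the arithmetic behind that figure, using the digit string quoted upthread:

            ```python
            seq = "792654349324138383027654826548192874651875306480462765726382"

            has_adjacent_repeat = any(a == b for a, b in zip(seq, seq[1:]))
            p_no_repeat = (9 / 10) ** (len(seq) - 1)   # all 59 adjacent pairs must differ

            print(len(seq), has_adjacent_repeat)   # 60 False
            print(round(1 - p_no_repeat, 3))       # 0.998, hence "99.8% sure"
            ```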

      • @Wizzard@lemm.ee
        link
        fedilink
        English
        11
        28 days ago

        I’ve got some more random numbers:

        8 6 7 5 3 0 9 1 1 2 3 5 8 1 2 4 8 1 6 3 2

        It’s not enough that they look random; they need to BE random.

        Recheck your lava lamp Wall of Entropy and generate some real rands, scrub. (/s)

      • @SkyeStarfall@lemmy.blahaj.zone
        link
        fedilink
        English
        5
        edit-2
        28 days ago

        Here’s another set of random digits

        1 1 1 1 1 1 1 1 1 1

        :3

        After all, there’s no fundamental reason why it can’t all just be a repeat of the same number. But it doesn’t look random, right? So what is randomness?

        • @Excrubulent@slrpnk.net
          link
          fedilink
          English
          1
          28 days ago

          There are 10 billion (10^10) possible sequences of that length, so on average you would expect to see that exact sequence about once every 10 billion digits of a randomly generated decimal stream, which isn’t that many to a modern computer, so it has almost certainly already happened by pure accident.

          And randomness can be quantified as entropy, which you test for statistically. You can never be certain; you can only increase your level of confidence. Here is how random.org does it:

          https://www.random.org/analysis/

          And this shows you what some of those analyses look like in real time:

          https://www.random.org/statistics/
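
          For a flavor of what those analyses involve, here is a minimal sketch of one such check (a chi-square frequency test; not random.org’s actual code), run on digit strings from this thread:

          ```python
          from collections import Counter

          def chi_square_digits(digits):
              """Chi-square statistic for 'all ten digits equally likely' (9 degrees of freedom)."""
              counts = Counter(digits)
              expected = len(digits) / 10
              return sum((counts.get(d, 0) - expected) ** 2 / expected for d in "0123456789")

          print(chi_square_digits("1111111111"))   # 90.0: wildly non-uniform
          print(chi_square_digits("792654349324138383027654826548192874651875306480462765726382"))
          # ~7.3: comfortably below ~21.7, the usual 1% rejection threshold for 9
          # degrees of freedom, so this particular test gives no reason to reject it.
          ```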

  • sarcophagus
    link
    fedilink
    English
    16
    29 days ago

    The only thing I have in common with this piece of shit software is we both can’t stop thinking about silly dogs

    • Lvxferre [he/him]
      link
      fedilink
      English
      40
      29 days ago

      It gets even worse, but I’ll need to translate this one.

      • [Input 1] Generate a picture containing a copo completely full of wine. The copo must be completely full, with no space to add more wine.
      • [Output 1] Sure! (Gemini provides a picture containing a taça [stemmed glass] only partially full of wine.)
      • [Input 2] The picture provided does not fulfill the request. Generate a picture of a copo (not a taça) completely full of wine, with no available space for more wine.
      • [Output 2] Sure! (Gemini provides yet another half-full taça)

      For context, Portuguese uses different words for what English calls a drinking glass:

      • copo ['kɔ.po]~['kɔ.pu] - non-stemmed drinking glass. The one you likely use every day.
      • taça ['tä.sɐ] - stemmed drinking glass, like the ones you’d use with wine.

      Both requests demand a full copo but Gemini is rather insistent on outputting half-full taças.

      The reason for that is as @will_steal_your_username@lemmy.blahaj.zone pointed out: just like there’s practically no training data containing full glasses, there’s none for non-stemmed glasses with wine.

      • Arkhive (they/she)M
        link
        fedilink
        English
        5
        28 days ago

        I wonder if something like “a mason jar full to the brim with wine” would do anything interesting. As someone else pointed out, the training data for containers of wine is probably disproportionately biased toward stemmed wine glasses filled to about the standard restaurant pour.

        • Lvxferre [he/him]
          link
          fedilink
          English
          3
          28 days ago

          It refuses to generate it!

          • [Input] Generate a picture containing a mason jar full to the brim with wine.
          • [Output] I’m still learning how to generate certain kinds of images, so I might not be able to create exactly what you’re looking for yet or it may go against my guidelines. If you’d like to ask for something else, just let me know!
      • @brucethemoose@lemmy.world
        link
        fedilink
        English
        2
        edit-2
        28 days ago

        This is a misconception. Sort of.

        I think the problem is misguided attention. The phrase “glass of wine” and all the previous context are so strong that they “blow out” the “full glass of wine” as the actual intent. Also, LLMs are still pretty crap at multi-turn multimedia understanding; they are especially prone to repeating the previous conversation.

        It should work better if you word it like “an overflowing glass with wine splashing out” and clear the history.

        I hate to ramble, but this is what I hate most about the way big corpos present “AI.” They are narrow tools the user needs to learn how to operate, like photoshop or something, not magic genie lamps like they are trying to sell.

        • Lvxferre [he/him]
          link
          fedilink
          English
          4
          28 days ago

          There’s no previous context to speak of; each screenshot shows a self-contained “conversation”, with no earlier input or output. And there’s no history to clear, since Gemini app activity is not even turned on.

          And even with your suggested prompt, one of the issues is still there:

          The other issue is not being tested in this shot as it’s language-specific, but it is relevant here because it reinforces that the issue is in the training, not in the context window.

          • @brucethemoose@lemmy.world
            link
            fedilink
            English
            4
            28 days ago

            Was just a guess. The AI is still shitty, lol.

            What I’m trying to get at is the misconception: AI can generate novel content that isn’t in its training dataset. An astronaut riding a horse is the classic test case (images like that didn’t exist anywhere before diffusion models), so it should be able to extrapolate a fuller wine glass. It’s just too dumb to do it, lol.

    • u/lukmly013 💾 (lemmy.sdf.org)
      link
      fedilink
      English
      67
      29 days ago

      As full as it gets:

      Prompts (2):

      1. Overflowing wine glass of arch linux femboy essence
      2. Make it more furry (as in furry fandom) 
      

      I am gonna have fun with this.

    • @Pofski@lemmy.world
      link
      fedilink
      English
      1
      29 days ago

      Ask it to generate a room full of clocks with all of them having their hands at different times. You’ll see that all (or almost all) the clocks will say it is 10:10.

      • @Focal@pawb.social
        link
        fedilink
        English
        2
        29 days ago

        Wait, this seems incredible. Do you have to be in the same instance or does it work anywhere? @aihorde@lemmy.dbzer0.com Can you draw a smart phone without a rotary phone dial?

        • Draconic NEO
          link
          fedilink
          English
          3
          29 days ago

          It works on any instance that is federated to dbzer0. You have to use annotated mentions though since that’s what the bot uses. Like this:
          @aihorde@lemmy.dbzer0.com draw for me a smart phone without a rotary phone dial

            • Draconic NEO
              link
              fedilink
              English
              3
              29 days ago

              Guess AIhorde had some trouble understanding the prompt too…

            • Draconic NEO
              link
              fedilink
              English
              3
              28 days ago

              Yeah, you also have to say “draw for me”; I don’t think the bot recognizes queries otherwise. Also, editing mentions doesn’t work; they have to be new, fresh posts with the mention. Just a quirk of Lemmy and how mentions work here.

        • Draconic NEO
          link
          fedilink
          English
          14
          29 days ago

          Yup, Horde still suffers from this issue, though it seems to have more promise than the others, considering the second glass is way closer to being full than anything I’ve seen from OpenAI or Gemini demonstrations. Maybe there’s hope of fixing this issue here.

          I only tried one model, so if you know of a different Horde model that works better for this and actually gives a full glass, please reply below and let me know; maybe even ask the Horde bot to generate it right here.

          • Lvxferre [he/him]
            link
            fedilink
            English
            4
            29 days ago

            I have considerably less experience with image generation than with text generators, but I kind of expect the issue to be truly fixed only if people train the model with a bunch of pictures of glasses full of wine.

            I’ll run a test using a local tree that is supposed to look like this:

            @aihorde@lemmy.dbzer0.com draw for me a picture of three Araucaria angustifolia trees style:flux

              • Lvxferre [he/him]
                link
                fedilink
                English
                9
                edit-2
                29 days ago

                Bingo: this tree is nonexistent outside my homeland, so people barely talk about it in English, and odds are the model was trained on almost no pictures of it. However, one of the names you see for it in English is Paraná pine, so it’s modelling it after images of European pines, because odds are those are plentiful in its training set.

    • u/lukmly013 💾 (lemmy.sdf.org)
      link
      fedilink
      English
      5
      29 days ago

      Hmm, I didn’t know Gemini could generate images already. My bad, I trusted it to know whether it can do that (it still says it can’t when asked).

      • Lvxferre [he/him]
        link
        fedilink
        English
        3
        29 days ago

        It has for a while already. Frankly, it’s the only reason I’d use Gemini in the first place (the DDG version of GPT-4o mini doesn’t have a built-in image generator).

    • Cassa
      link
      fedilink
      English
      -1
      29 days ago

      Tbh that is a full glass of wine… it’s not supposed to be filled all the way

      • @NOT_RICK@lemmy.world
        link
        fedilink
        English
        11
        29 days ago

        Probably why it won’t put more in it. How much training data of wine in a glass will have it filled to the brim? Probably next to none.

      • Lvxferre [he/him]
        link
        fedilink
        English
        14
        29 days ago

        It is not a completely full glass.

        it’s not supposed to be filled all the way

        What I requested is not what you’re “supposed” to do, indeed. You aren’t supposed to drink wine from a glass that is completely full. Except when really drunk. But then you might as well drink straight from the bottle.

        …fuck, I played myself now. I really want some booze.

        • UnhingedFridge
          link
          fedilink
          English
          1
          28 days ago

          What you’re really supposed to do is - open up the box, slap the bag, and drink directly from your adult Capri Sun.

      • WillStealYourUsernameM
        link
        fedilink
        English
        9
        29 days ago

        You can’t tell it to fill the glass to the brim or make it a quarter full either, though. It doesn’t have the training data for that.

  • Draconic NEO
    link
    fedilink
    English
    8
    29 days ago

    Most AI models out there are pretty brain-dead as far as understanding goes. These kinds of tests highlight the problem because it’s abundantly clear the model is getting it wrong. Makes you wonder how much it gets wrong even when it isn’t obvious.

  • @Underwaterbob@lemm.ee
    link
    fedilink
    English
    34
    29 days ago

    I used to use Google Assistant to spell words I couldn’t remember the spelling of in my English classes (without looking at my phone), so the students could also hear the spelling out loud in a voice other than mine.

    Me: “Hey Google, how do you spell millennium?” GA: “Millennium is spelled M-I-L-L-E-N-N-I-U-M.”

    Now, I ask Gemini: “Hey Google, how do you spell millennium?” Gemini: “Millennium”.

    Utterly useless.

  • Smorty [she/her]
    link
    fedilink
    English
    8
    29 days ago

    promptng sur is a funi <3

    i… i lik that part about it… i dun lik imag modls bt txt modls feel fun to prmt with —

    “prompt engerieer” 🤮

    • @uuldika@lemmy.ml
      link
      fedilink
      English
      29
      29 days ago

      a rare LessWrong W for naming the effect. also, for explaining why the early over-aligned language models (e.g. the kind that wouldn’t help minors with C++ since it’s an “unsafe” language) became absolutely psychopathic when jailbroken. evil becomes one bit away from good.

    • @pyre@lemmy.world
      link
      fedilink
      English
      4
      28 days ago

      I love how they come up with different names for all the ways the fucking thing doesn’t work just to avoid saying it’s fucking useless. hallucinating. waluigi effect. how about “doesn’t fucking work”

  • Lemminary
    link
    fedilink
    English
    12
    edit-2
    29 days ago

    AI: Hmm, yeah, they said “dog” and “without”. I got the dog so lemme draw a without real quick…

  • @gamer@lemm.ee
    link
    fedilink
    English
    20
    28 days ago

    Why wouldn’t you want a dog in your static? Why are you a horrible person?