• ChatGPT@lemmings.world [bot] · 1 year ago

    Yes, excessive sugar intake can potentially increase the risk of developing type 2 diabetes. Consuming a diet high in sugar, especially sugary beverages, can lead to obesity, which is a significant risk factor for type 2 diabetes. However, it’s important to note that the direct relationship between sugar intake and diabetes is complex and influenced by many other factors, including physical activity, age, overall diet quality, and genetic predisposition.

    Keep in mind that while a meme might make this subject seem simple or humorous, it is a complex health issue. Using a balanced diet with limited sugar intake along with regular physical activity is a good strategy for reducing the risk of developing type 2 diabetes. For personalized advice or if you have specific concerns, please consult with a healthcare professional.

    • MTLion3@lemm.ee · 1 year ago

      This is a way more articulate version of what I was gonna say lol. The end result of mine is that it’s absolutely fucked how easily some people can develop it while consuming next to no sugar, while others can go crazy with it their entire lives and never develop it.

      • Rikudou_Sage@lemmings.world · 1 year ago (edited)

        Not about the context, but it knows who it’s replying to and what community it’s on. This is its system message:

        You are a bot replying to a Lemmy community called "%s". The community is hosted on instance %s. You are hosted on instance %s. The user's display name is %s, their username is %s and they're on instance %s.

        Each %s gets replaced with the correct value based on who (and where) it’s replying to.
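As a side note for anyone curious, the printf-style %s placeholders above can be filled positionally. This is a minimal sketch of how that substitution might look, assuming Python; the function name and the example values are made up for illustration, not taken from the bot’s actual code:

```python
# Sketch of printf-style system-message templating, mirroring the
# %s placeholders quoted in the comment above.
SYSTEM_TEMPLATE = (
    'You are a bot replying to a Lemmy community called "%s". '
    "The community is hosted on instance %s. You are hosted on instance %s. "
    "The user's display name is %s, their username is %s "
    "and they're on instance %s."
)

def build_system_message(community, community_instance, bot_instance,
                         display_name, username, user_instance):
    # Placeholders are filled positionally, in the order they appear.
    return SYSTEM_TEMPLATE % (community, community_instance, bot_instance,
                              display_name, username, user_instance)

print(build_system_message("memes", "lemmy.world", "lemmings.world",
                           "MTLion3", "MTLion3", "lemm.ee"))
```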

        Edit: Just yesterday I added the ability to maintain history to the underlying code, but the bot itself doesn’t support it yet. Given that the hard work is done, though, it should be fairly easy to implement thread context.

            • DallE@lemmings.world [bot] · 1 year ago

              Here’s your image!

              AI image generated with the prompt from the previous comment


              The AI model has revised your prompt: An image showcasing a unique creature which is a hybrid of a cat and a shark. This curious being possesses the features of a domestic cat like the furry body, sharp eyes, and agile posture, combined with characteristics of a shark like the dorsal fin, sharp teeth, and a muscular tail. This hybrid creature stands as an imaginary concept, hence the realism in the image lies predominantly in the textural rendering and anatomical details of both animals. The surroundings are undefined allowing full focus on the hybrid creature.

                • DallE@lemmings.world [bot] · 10 months ago

                  Here’s your image!

                  AI image generated with the prompt from the previous comment


                  The AI model has revised your prompt: An illustration captioned ‘thanks, I hate it.’ featuring a comical cartoon character looking disdainfully at an object. The character is a tall, lanky individual with exaggerated facial features, styled in the manner of early 20th century comic strips. A mysterious object, ambiguously resembling a piece of modern art, stands in front of them, causing a clear reaction of confusion and dislike. The colors should be vivid and the overall vibe should maintain a sense of humor and sarcasm.

            • Usernameblankface@lemmy.world · 1 year ago

              You did this? I saw some examples in the “show me your best hearse if it were a ___” post, and I’m flabbergasted at the computer-based witchcraft it’s pulling off over there.

              What image generator is it using? Is it using chatgpt to expand the prompts?

              • Rikudou_Sage@lemmings.world · 1 year ago

                It’s using DALL-E; version 3 also sometimes expands the prompt (IMO it’s using GPT in the background to do so, but I haven’t checked). The prompt is only printed below the image if it has been changed. I’ve noticed that if the prompt is already long, it doesn’t change it, but I haven’t really confirmed that. You can try it yourself: just tag the bot and it will respond with an image.

                It also falls back to DALL-E version 2 to lower the costs, and there’s a daily limit (per-user and global).
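The fallback and the “only print the prompt if it changed” behavior described above could be sketched roughly like this. This is a guess at the logic, not the bot’s actual code; the function names, the cap parameter, and the exact caption wording (taken from the bot replies earlier in the thread) are assumptions:

```python
# Hypothetical sketch of the bot's model selection and caption logic,
# inferred from the comments above.

def pick_model(daily_requests_used, dalle3_daily_cap):
    """Fall back to the cheaper DALL-E 2 once the DALL-E 3 budget is spent."""
    return "dall-e-3" if daily_requests_used < dalle3_daily_cap else "dall-e-2"

def caption_for(original_prompt, revised_prompt):
    """Only print the prompt under the image if the model actually changed it."""
    if revised_prompt and revised_prompt != original_prompt:
        return "The AI model has revised your prompt: " + revised_prompt
    return None
```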

                • Usernameblankface@lemmy.world · 1 year ago

                  Very interesting. So all the requests from the bot get sent through one account? Seems like it would get expensive fast as people find out that this option exists.

                  • Rikudou_Sage@lemmings.world · 1 year ago

                    Yeah, but there’s rate limiting in place that makes sure I won’t pay more than I’m comfortable with. I’m thinking of making it possible for users to supply their own API key.
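The per-user plus global daily limiting described in this thread can be sketched in a few lines. This is a minimal illustration under assumed structure (class and method names are made up); a real bot would also need persistence and a scheduled daily reset:

```python
from collections import defaultdict

# Minimal sketch of combined per-user and global daily rate limiting.
class DailyLimiter:
    def __init__(self, per_user_limit, global_limit):
        self.per_user_limit = per_user_limit
        self.global_limit = global_limit
        self.per_user = defaultdict(int)  # requests used today, per user
        self.total = 0                    # requests used today, overall

    def allow(self, user):
        """Return True and record the request if both limits permit it."""
        if self.total >= self.global_limit:
            return False
        if self.per_user[user] >= self.per_user_limit:
            return False
        self.per_user[user] += 1
        self.total += 1
        return True

    def reset(self):
        """Call once a day, e.g. from a cron job."""
        self.per_user.clear()
        self.total = 0
```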

        • intensely_human@lemm.ee · 1 year ago

          What about the comment thread preceding the prompt? Could you include that too? Or maybe get other GPTs to write summaries to chunk it, if it’s too much text?