• paysrenttobirds@sh.itjust.works
    9 months ago

    If the person asks for a piece of code, for instance, it might just give a little information and then instruct users to fill in the rest. Some complained that it did so in a particularly sassy way, telling people that they are perfectly able to do the work themselves, for instance.

    It’s just started reading through the less helpful half of stack overflow.

  • kromem@lemmy.world
    9 months ago

    One of the more interesting ideas I saw in the HN discussion was this: if an LLM was trained on more recent data containing a lot of “ChatGPT is harmful” content, was instruction-tuned to “do no harm,” and was then given a system message of “you are ChatGPT” (as ChatGPT is), the logical conclusion would be to do less.