G/O Media, a major online media company that runs publications including Gizmodo, Kotaku, Quartz, Jezebel, and Deadspin, has announced that it will begin a “modest test” of AI content on its sites.

The trial will include “producing just a handful of stories for most of our sites that are basically built around lists and data,” Brown wrote. “These features aren’t replacing work currently being done by writers and editors, and we hope that over time if we get these forms of content right and produced at scale, AI will, via search and promotion, help us grow our audience.”

  • Meloku@feddit.cl · 1 year ago

    What all these trend-chasing CEOs fail to grasp about ChatGPT is that the neural network is trained to return what looks like a human-written answer, but it is NOT, IN ANY CASE, GOING TO RETURN INFORMATION. If you ask ChatGPT to write an essay with sources, ChatGPT is going to write a somewhat coherent essay with what look like sources, but it’s a crapshoot whether the sources are even real, because you asked for an essay with sources, not an essay USING any given source. Anyways, I’m going to heat some popcorn and wait for the inevitable fake articles and the associated debacle.

    • count_duckula@discuss.tchncs.de · 1 year ago

      ChatGPT is an engineering marvel in that it has understood the semantics of language. However, it has absolutely no idea what it is talking about beyond generating the next token in a string of what sounds like natural language. I wish more people would understand this nuance.
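The “generating the next token” point above can be sketched in a few lines. This is a toy bigram model, only loosely analogous to a real LLM; the table, probabilities, and names are all made up for illustration:

```python
# Minimal sketch of autoregressive generation: the model only ever picks
# a statistically plausible next token given the tokens so far -- there is
# no separate store of facts to consult. All entries here are invented.
import random

# Toy "model": next-token probabilities as learned from text statistics.
toy_lm = {
    ("the", "source"): {"is": 0.6, "was": 0.4},
    ("source", "is"): {"Smith": 0.5, "Jones": 0.5},  # plausible names, never verified
}

def generate(context, steps, rng=random.Random(0)):
    out = list(context)
    for _ in range(steps):
        dist = toy_lm.get(tuple(out[-2:]))
        if dist is None:
            break  # a real LLM never stops like this; it always has a distribution
        tokens, weights = zip(*dist.items())
        out.append(rng.choices(tokens, weights=weights)[0])
    return out

print(generate(["the", "source"], 2))
```

The toy “model” will happily emit a plausible-looking name as a source, because plausibility of the next token is the only thing it optimizes for — which is exactly why asking for “an essay with sources” yields source-shaped text rather than sources.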

    • TMoney@beehaw.org · 1 year ago

      If you’ve seen the articles most news agencies put out, besides a select high-quality few, I doubt anyone will be able to tell the difference. They’re all shit. They’re probably just switching from Mechanical Turk to this.

  • mint@beehaw.org · 1 year ago

    so many of the replies to this post are so embarrassing lol. just because you can’t tell the difference between good and bad writing doesn’t mean that real writers should be replaced by bots that literally produce a facsimile of written content, as opposed to content with a point, which an LLM can’t actually do.

    like you think you’re looking at bad writing, but when the robots start churning out shit articles you’ll realize that you’d rather have a human than this nonsense

  • Fizz@lemmy.nz · 1 year ago

    I doubt we will even notice the quality drop. Those sites have been pumping out pure trash for years.

  • sarsaparilyptus@lemmy.fmhy.ml · 1 year ago

    All these AIs are going to produce is incoherent, poorly written, unresearched puff pieces about topic-adjacent garbage. In other words, we won’t even be able to tell when the switch happens.

  • ConsciousCode@beehaw.org · 1 year ago

    As someone working on LLM-based stuff, this is a terrible idea with current models and techniques unless they have a dedicated team of human editors to make sure the AI doesn’t go off the rails, to say nothing of the cruelty of firing people to save maybe a few hundred thousand dollars with a substantial drop in quality. They can be very smart with proper prompting, but are also inconsistent and require a lot of handholding for anything requiring executive function or deliberation (like… writing an article meant to make a point). It might be possible with current models, but the field is way too new and techniques too crude to make this work without a few million dollars in R&D, at which point it’ll probably be completely wasted when new developments come out nearly every week anyway.

    • Sina@beehaw.org · 1 year ago

      > but the field is way too new and techniques too crude to make this work without a few million dollars in R&D

      I think AI is evolving so rapidly that by the time they get anywhere with this on Gizmodo, the hand-holding might not be nearly as necessary.

      • ConsciousCode@beehaw.org · 1 year ago

        It’s hard to say. My intuition is that LLMs themselves aren’t capable of this until some trillion-parameter neural phase transition maybe, and more focus needs to be put on the cognitive architectures surrounding them. Basically, automated hand-holding so they don’t forget what they’re supposed to be doing, the equivalent of the brain’s own feedback loop.

        The main issue is executive function is such a weak signal in the data that it would probably have to reach ASI before it starts optimizing for it, so you either need specialized RL or algorithmic task prioritization.
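The “automated hand-holding” idea above can be sketched as a feedback loop around the model: check the draft against the task, feed the failures back, and escalate to a human if it never converges. This is a hypothetical sketch; `call_model` is a stand-in, not a real API, and the check is deliberately crude:

```python
# Hypothetical sketch of an editor-in-the-loop wrapper for an LLM.
# `call_model` is a placeholder for a real model call (invented here).
def call_model(task, feedback=None):
    # Stand-in: returns a draft string; a real call would hit an LLM API.
    return f"draft covering {task}" + ("" if feedback is None else " (revised)")

def passes_checks(draft, required_points):
    # Task-level checks the model can't be trusted to apply to itself.
    return all(point in draft for point in required_points)

def supervised_generate(task, required_points, max_rounds=3):
    draft = call_model(task)
    for _ in range(max_rounds):
        if passes_checks(draft, required_points):
            return draft
        # Feed the specific failures back as revision instructions.
        missing = [p for p in required_points if p not in draft]
        draft = call_model(task, feedback=missing)
    return None  # escalate to a human editor instead of publishing
```

The design point is the `None` branch: the loop is the “brain’s own feedback loop” equivalent the parent comment describes, and anything it can’t fix goes to a person rather than straight to the front page.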

  • sub_o@beehaw.org · 1 year ago

    Gaming press has been in a downward spiral for quite some time, so this is expected. Even before and without AI replacing them, there have been quite a number of staff cuts in recent years, and they have been employing underpaid freelancers for a long time (I remember IGN boasting about paying freelancers above market rate, but it’s still barely enough to live on).

    Eventually these companies with AI-generated articles will also disappear, not only because the AI is trained on clickbaity data, but also because the current trend is people getting their previews / reviews from (sponsored 🤢) YouTubers, Twitch streamers, or worse, TikTokers (how do you fit a review into a 30-second video?).

    And IIRC Jeff Gerstmann mentioned on his podcast that some people get into gaming press as a stepping stone to the gaming industry itself, so maybe they will get a job there? I mean, many of the older gaming journalists are either taking turns occupying senior positions at IGN, GameSpot, Polygon, and Kotaku, or just straight up working for game companies as creative directors or in PR. So far it seems to have worked out not so badly for them, except for those in junior / freelancer positions.

    Thankfully, investigative journalism and long-form insight writing are still untouched by AI shenanigans. Bellingcat and the ICIJ are still writing great stuff. But I have no idea how long that will last.

  • TwoGems@beehaw.org · 1 year ago (edited)

    Every company that does this should be boycotted. Creators and writers need to create their own organizations, unionize, or even help one another into employment. AI could have been used for useful things, but corporations can never be trusted to be ethical.