Cake day: March 22nd, 2024





  • Machine learning has been a field for decades, as others said, but Wikipedia would be a better expansion of the topic than I can give here. In a nutshell, it’s largely about predicting outputs based on trained input examples.

    It doesn’t have to be text. For example, astronomers use it to find certain kinds of objects in raw data feeds. Object recognition (identifying things in pictures with little bounding boxes) is an old art at this point. Series prediction models are a thing, and LanguageTool uses a tiny model to detect commonly confused words for grammar checking. And yes, image hashing is another, though it’s not entirely machine learning based. IDK what Tineye does in their backend, but there are some more “oldschool” approaches using traditional programming techniques that generate signatures for images that can be easily compared in a huge database.
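To be clear, I have no idea what signature scheme Tineye actually uses; but as a sketch of the classic non-ML approach, here’s a toy “average hash”: downscale an image, threshold each pixel against the mean brightness, and compare the resulting bit strings by Hamming distance.

```python
# Toy "average hash" image signature (NOT Tineye's actual method, just
# an illustration of the "oldschool" non-ML signature idea).

def average_hash(pixels):
    """pixels: 2D list of grayscale values (pretend it's a downscaled image).
    Returns a bit list: 1 where a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits; small distance = likely the same image."""
    return sum(x != y for x, y in zip(a, b))

img = [[10, 200], [30, 220]]        # stand-in for a downscaled photo
tweaked = [[12, 198], [29, 225]]    # same photo, slightly re-encoded
different = [[200, 10], [220, 30]]  # an unrelated image

print(hamming(average_hash(img), average_hash(tweaked)))    # 0: match
print(hamming(average_hash(img), average_hash(different)))  # 4: no match
```

The point is that the hash survives small re-encodes, so a huge database only has to store and compare short bit strings.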

    You’ve probably run ML models in photo editors, your TV, your phone (voice recognition), desktop video players or something else without even knowing it. They’re tools.

    Separately, image similarity metrics (like LPIPS or SSIM) that measure the difference between two images as a number (where, say, 1 would be a perfect match and 0 totally unrelated) are common components in machine learning pipelines. Some of these, like SSIM, are classic signal-processing math rather than machine learning; others, like LPIPS or VMAF (which Netflix developed for video), are themselves learned.
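For a feel of the non-learned kind, here’s a simplified single-window SSIM in plain Python; real implementations slide a window across the image and average, but the formula is the same, and identical inputs score exactly 1.0.

```python
def ssim_global(x, y, C1=6.5025, C2=58.5225):
    # Simplified one-window SSIM over flattened grayscale patches.
    # C1/C2 are the usual stabilizing constants for 8-bit images.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

patch = [52, 55, 61, 59, 70, 61, 76, 61]    # a flattened grayscale patch
print(ssim_global(patch, patch))            # 1.0: identical patches
# Much lower (it can even go negative) for a brightness-inverted patch:
print(ssim_global(patch, [255 - p for p in patch]))
```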

    Text embedding models do the same with text. They are ML models.
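Once you have embedding vectors, comparing them is just cosine similarity. The four-dimensional “embeddings” below are made up for illustration; real models output hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(u, v):
    # Standard comparison for embedding vectors: 1.0 = same direction.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Made-up toy vectors; a real embedding model would produce these.
cat     = [0.9, 0.1, 0.8, 0.0]
kitten  = [0.85, 0.15, 0.75, 0.05]
invoice = [0.0, 0.9, 0.1, 0.8]

print(cosine_similarity(cat, kitten))   # close to 1: similar meaning
print(cosine_similarity(cat, invoice))  # much lower: unrelated text
```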

    LLMs (aka models designed to predict the next ‘word’ in a block of text, one at a time, as we know them) in particular have an interesting history, going back to Google’s transformer research and models like BERT (though BERT technically predicted masked-out words rather than the next one). There were also tiny LLMs people ran on personal GPUs before ChatGPT was ever a thing, like the infamous Pygmalion 6B roleplaying bot, a finetune of GPT-J 6B. They were primitive and dumb, but it felt like witchcraft back then (before AI Bro marketers poisoned the well).
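“Predict the next word” itself is ancient. A bigram model is about the dumbest possible version of that job; LLMs do the same thing with billions of learned parameters instead of a lookup table.

```python
from collections import Counter, defaultdict

# Dumbest-possible next-word predictor: count which word follows which.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent follower of `word` in the corpus.
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": the most common word after "the"
```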







  • Not everyone’s a big kb/mouse fan. My sister refuses to use one on the HTPC.

    Hence I think couch usage was its not-insignificant niche. Portable keyboards are really awkward and clunky on laps, and the Steam Controller is way better and more ergonomic than an integrated trackpad.

    Personally I think it was a smart business decision, because of this:

    “It doesn’t have 2 joysticks, so I just buy an Xbox one instead.”

    No one’s going to buy a Steam-branded Xbox controller, but making it different gives people a reason to. And I think what killed it is that it wasn’t plug-and-play enough, e.g. it didn’t work out of the box with many games.




    Training data is curated, and training is continuous.

    In other words, one (for example, Musk) can finetune a big language model on a small pattern of data (for example, antisemitic content) to ‘steer’ the LLM’s outputs toward that.

    You could bias it towards fluffy bunny discussions, then turn around and send it the other direction.
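As a toy stand-in for what that steering looks like: take the bigram counter idea, “pretrain” it on neutral text, then “finetune” it on a small but one-sided batch. Real finetuning updates neural weights with gradients rather than counts, but the effect of a skewed data slice is the same idea.

```python
from collections import Counter, defaultdict

# Toy analogy for finetune steering (NOT real gradient finetuning):
# a small, skewed batch of extra data flips the model's favorite answer.

def train(model, text):
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1

def predict(model, word):
    return model[word].most_common(1)[0][0]

model = defaultdict(Counter)
train(model, "the weather is nice and the food is nice and the music is nice")
print(predict(model, "is"))   # "nice"

# "Finetune" on a small but one-sided dataset...
train(model, "everything is terrible everything is terrible "
             "everything is terrible everything is terrible")
print(predict(model, "is"))   # "terrible": the skewed batch now dominates
```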

    Each round of finetuning does “lobotomize” the model to some extent though: it forgets stuff, overuses common phrases, loses some ability to generalize, ‘erases’ careful anti-repetition tuning, and so on. In other words, if Elon is telling his engineers “I don’t like these responses. Make the AI less woke, right now,” he’s basically sabotaging their work. They’d have to start over with the pretrain and sprinkle that data into months(?) of retraining to keep it from dumbing down or going off the rails.

    There are ways around this outlined in research papers (and some open source projects), but Big Tech is kinda dumb and ‘lazy’ since they’re so flush with cash, so they don’t use them. Shrug.




  • There is a nugget of ‘truth’ here:

    https://csl.noaa.gov/news/2023/390_1107.html

    I can’t find my good source on this, but there are very real proposals to seed the Arctic or Antarctic with aerosols to stem a runaway greenhouse effect.

    It’s horrific. It would basically rain sulfuric acid down onto the terrain; even worse than it sounds. But it would only cost billions, not the trillions of other geoengineering schemes I’ve seen.

    …And the worst part is that it’s Arctic/climate researchers proposing this. They intimately know exactly how awful it would be, which shows how desperate they are to even publish such a thing.

    But I can totally understand how a layman (maybe vaguely familiar with chemtrail conspiracies) would come across this and be appalled, and how conservative influencers pounce on it ’cause they can’t help themselves.

    Thanks to people like MTG, geoengineering efforts will never even be considered. :(


    TL;DR Scientists really are proposing truly horrific geoengineering schemes “injecting chemicals into the atmosphere” out of airplanes. But it’s because of how desperate they are to head off something apocalyptic, and it’s not even close to being implemented. They’re just theories and plans.