• 11 Posts
  • 1.05K Comments
Joined 1 year ago
Cake day: March 22nd, 2024


  • I had to look it up. The full context is:

    So the new communications strategy for Democrats, now that their polling advantage is collapsing in every single state… collapsing in Ohio. It’s collapsing even in Arizona. It is now a race where Blake Masters is in striking distance. Kari Lake is doing very, very well. The new communications strategy is not to do what Bill Clinton used to do, where he would say, “I feel your pain.” Instead, it is to say, “You’re actually not in pain.” So let’s just, little, very short clip. Bill Clinton in the 1990s. It was all about empathy and sympathy. I can’t stand the word empathy, actually. I think empathy is a made-up, new age term that — it does a lot of damage. But, it is very effective when it comes to politics. Sympathy, I prefer more than empathy. That’s a separate topic for a different time.

    Later on Twitter:

    The same people who lecture you about ‘empathy’ have none for the soldiers discharged for the jab, the children mutilated by Big Medicine, or the lives devastated by fentanyl pouring over the border. Spare me your fake outrage, your fake science, and your fake moral superiority.

    https://www.snopes.com/fact-check/charlie-kirk-empathy-quote/

    It’s not as bad as the out-of-context quote, but I’m having trouble even wrapping my head around it. I guess the argument is something like:

    How can you claim to have empathy when you actively ignore or dismiss the pain of these specific groups? Your empathy is not real; it’s a political weapon. Fake outrage, fake science, and fake moral superiority used to win arguments and elections.

    He’s not wrong about (many) Democrats. But even setting vaccine denialism aside, the core conclusion of favoring ‘sympathy over empathy’ seems kind of unavoidable in his framing. It feels like tankie whataboutism: ‘Democrats’ empathy is fake; therefore, our more distanced sympathy is the justified approach.’


  • And any divergence from that is “ruining games” or “being woke,” to the point that we don’t even GET those games outside of the rare case of a game nobody cared about becoming popular.

    I would argue the origin is sales: e.g., the publisher wants the sex appeal to sell, so that’s what they put in the game. Early ‘bro’ devs may be part of this, but the directive from up top is the crux of it.

    And that got so normalized that it became what gamers expect. Now they whine like toddlers when anyone tries to change it, but that just happens to be an existing problem that conservative movements jumped on after the fact.


    TL;DR the root cause is billionaires.

    Like always.


  • Yeah. But it also breaks things relative to the llama.cpp baseline, hides or doesn’t support some features/optimizations, and definitely doesn’t support the more efficient iq_k quants of ik_llama.cpp and its specialized MoE offloading.

    And that’s not even getting into the various controversies around ollama (like broken GGUFs or indications they’re going closed source in some form).

    …It just depends on how much performance you want to squeeze out, and how much time you want to spend on the endeavor. Small LLMs are kinda marginal though, so IMO it’s only worth it if you really want to tinker; otherwise you’re probably better off spending a few bucks on an API that doesn’t log requests.
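    To make the “MoE offloading” bit concrete, here’s a rough sketch of the kind of invocation involved. The flag names (`-ot`, `-fmoe`, `-rtr`) come from ik_llama.cpp’s docs and may drift between versions, the model filename and regex are placeholders, and you should verify everything against `--help`:

    ```shell
    # Sketch only: keep attention/dense layers on the GPU, but override the
    # huge MoE expert tensors back to system RAM, where ik_llama.cpp's
    # fused-MoE (-fmoe) and runtime-repack (-rtr) paths handle them.
    ./llama-server -m /models/GLM-4-IQ4_K.gguf \
      -ngl 99 \
      -ot ".ffn_.*_exps.=CPU" \
      -fmoe -rtr
    ```

    This split is why a single consumer GPU can run huge MoE models: only a small fraction of the weights is active per token, so the expert tensors tolerate living in (slower) system RAM.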


  • At risk of getting more technical, ik_llama.cpp has a good built-in webui:

    https://github.com/ikawrakow/ik_llama.cpp/

    On that note, it’s also way better than ollama: it can run way smarter models than ollama can on the same hardware.

    For reference, I’m running GLM-4 (667 GB of raw weights) on a single RTX 3090/Ryzen gaming rig, at reading speed, with pretty low quantization distortion.

    And if you want a ‘look this up on the internet for me’ assistant (which you need for them to be truly useful), you need yet another Docker project as well.

    …That’s just how LLM self-hosting is now. It’s simply too hardware-intensive and ad hoc to be easy, smart, and cheap all at once. You can indeed host a small ‘default’ LLM without much tinkering, but it’s going to be pretty dumb, and pretty slow, on ollama defaults.
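    For what it’s worth, once any of these servers is up, frontends talk to it the same way. A minimal sketch against llama.cpp/ik_llama.cpp’s OpenAI-compatible endpoint (the default port 8080 and the model name are assumptions; some builds ignore the model field entirely):

    ```shell
    # Assumes llama-server is already running locally with a model loaded.
    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
            "model": "glm-4",
            "messages": [{"role": "user", "content": "Hello!"}],
            "temperature": 0.6
          }'
    ```

    That compatibility layer is also why swapping the backend (ollama, llama.cpp, ik_llama.cpp) under an existing webui is usually painless.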


  • https://en.wikipedia.org/wiki/Wikipedia:Reliable_sources/Perennial_sources

    Newsweek (2013–present): Unlike articles before 2013, Newsweek articles since 2013 are not generally reliable. From 2013 to 2018, Newsweek was owned and operated by IBT Media, the parent company of International Business Times. IBT Media introduced a number of bad practices to the once reputable magazine and mainly focused on clickbait headlines over quality journalism. Its current relationship with IBT Media is unclear, and Newsweek’s quality has not returned to its status prior to the 2013 purchase. Many editors have noted that there are several exceptions to this standard, so consensus is to evaluate Newsweek content on a case-by-case basis. In addition, as of April 2024, Newsweek has disclosed that they make use of AI assistance to write articles. See also: Newsweek (pre-2013).

    It’s in the warning category of ‘no consensus’ per Wikipedia’s source standards.

    I point this out all the time in /c/politics and /c/news, but the mods (AFAIK) have never responded to my suggestions of source guidelines (such as generally following Wikipedia’s in the link above).


  • Relatively minor efficiencies don’t make up for the compile times lol.

    It’s sometimes even a regression. For instance, self-compiled PyTorch is way slower than the official releases, and Firefox generally is too unless you are extremely careful about it. Stuff like Python doesn’t get a benefit without patches.

    I think the point of Gentoo is supposed to be ‘truly from source’ and utility for embedded stuff, not benchmark performance. Especially since there are distros that offer ‘-march’-optimized packages now.
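    For context, the per-CPU tuning in question mostly comes down to a few lines in Gentoo’s make.conf. An illustrative sketch (values are examples, not recommendations):

    ```shell
    # /etc/portage/make.conf -- illustrative fragment.
    # -march=native tunes codegen for the local CPU; this is the main
    # knob that '-march'-optimized binary distros now ship precompiled,
    # which is exactly why compiling everything yourself buys so little.
    COMMON_FLAGS="-O2 -march=native -pipe"
    CFLAGS="${COMMON_FLAGS}"
    CXXFLAGS="${COMMON_FLAGS}"
    MAKEOPTS="-j8"
    ```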