I have been teaching myself Linux on really old hardware. I am looking into building a new system so I can learn SDXL and maybe mess around a little with LLMs.

I have been reading as much as I can, but I get a lot of conflicting info. Ideally I would like a build I can get started with that isn’t at the bare minimum, if possible. Just the best value at a realistic starting point. I’m willing to save up more if it will save me from waiting forever while my PC is maxed out, with options to expand easily as I go. I don’t mind using used hardware. I have also read a bit about cheap enterprise hardware being an option that can expand easily?

Any help would be awesome. Thank you in advance.

P.S. Happy New Year! Wishing everyone all the best. After the past few years, we could all use a better one.

  • MightEnlightenYou@lemmy.world · 10 months ago

    I run a lot of LLMs locally, as well as doing image generation locally with Stable Diffusion.

    The most important factor is the GPU. If you’re going to do AI stuff on your GPU, it basically has to be a CUDA (i.e. NVIDIA) GPU. You’ll get the most bang for your buck with a 3090 Ti (the amount of VRAM matters a lot). And get at least 64 GB of RAM.

    If you get this you’ll be set for a year until you learn enough to want better hardware.

    A lot of people try to buy their way out of a lack of knowledge and skill about these things; don’t do that. I’m able to get better results with 7B models than many get with 70B models.

    Get LM Studio for the LLMs and get A1111 (or ComfyUI or Fooocus) for image generation.

    • dm_me_your_boobs@lemm.ee · 10 months ago

      How is ComfyUI these days? I use a similar node-based setup for my home automation and really liked the idea of using it for image gen. But, also, I kinda wanna just type and go for image gen, so StableDiffusionWebUI has been my go-to.

      • MightEnlightenYou@lemmy.world · 10 months ago (edited)

        I’d say that ComfyUI is superior in most ways (including speed and features), but I know A1111 much better than ComfyUI, so I only use ComfyUI when it can do something that A1111 can’t.

  • Starbuck@lemmy.world · 10 months ago

    I have an old Jetson Nano that’s pretty neat for getting into ML. It’s basically a Raspberry Pi with a GPU strapped to it. I’ve had it for a few years, so you could probably get one cheap.

    Anything bigger than that and I would say just look into paying for Google Colab. https://colab.google

    You aren’t going to want to buy dedicated resources for local training just yet. Learn the skills to interact with big hardware today, no need to wait. Only buy when you know what you need.

  • j4k3@lemmy.world · 10 months ago

    For LLMs the bigger models are super important. I got a 12th-gen i7 with a 16 GB 3080 Ti in a laptop. That is 20 logical cores and DDR5, along with the largest GPU option that was available a few months ago, short of spending $4k. I upgraded my RAM to the max of 64 GB within a week. I wish I had picked a laptop that could address 96+ GB of system memory. The laptop form factor sucks. The fans blow all the time, and the battery life with this monster GPU is less than an hour if it is running at all. The power supply also doubles as a hotplate.

    Most AI stuff works over your network in a web browser or on localhost on your machine. Towers are better. If you are training a LoRA you will absolutely cook a GPU to the point where it thermal throttles. I put my laptop in front of a window AC unit blowing max cold and it barely stays below 90°C. Towers and cooling are important, as are the number of available logical cores and the amount of RAM. You want the absolute max GPU you can afford.

    If I could do this again, I would look into a real workstation with 256 GB+ of system memory, support for enterprise CPUs with the most current AVX512 instructions possible (a supported feature in the Llama 2 model loader), and a 24 GB GPU.
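
    On Linux you can check whether a CPU exposes AVX512 by reading the feature flags in /proc/cpuinfo. A minimal sketch (the helper function and flag parsing are my own illustration, not from the comment):

```python
# Parse the "flags" line(s) of /proc/cpuinfo and report any AVX-512
# feature flags (avx512f, avx512bw, ...) the CPU advertises.

def avx512_flags(cpuinfo_text: str) -> set[str]:
    """Return the AVX-512 feature flags found in a /proc/cpuinfo dump."""
    flags: set[str] = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(f for f in line.split(":", 1)[1].split()
                         if f.startswith("avx512"))
    return flags

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as fh:
            found = sorted(avx512_flags(fh.read()))
            print(found if found else "no AVX-512 support")
    except FileNotFoundError:
        print("not Linux; /proc/cpuinfo unavailable")
```

    The same information is available via `lscpu` or `grep avx512 /proc/cpuinfo`; this just wraps it so you can check programmatically before picking a model loader build.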

    As far as I know the largest open-source model right now is a 180B model. At 16-bit precision, every parameter is 2 bytes, so you would need ~360 GB of memory to make that work. Do you need this? Maybe not, but I would LOVE to be able to try that model. After running a 70B and finding a few of them that I like, it is all I run. There is no comparison in the output quality between even a 33B and a 70B. Bigger is much better. All the smaller stuff needs training and tweaking to make it work well. Don’t trust benchmarks or basic reviews on YT. Ask someone that is actually using models in practice.
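
    The memory arithmetic above can be sketched as a quick back-of-the-envelope calculation (the function name and the 4-bit quantization example are mine, not from the comment):

```python
# Weight memory scales linearly with parameter count: bits per parameter
# divided by 8 gives bytes per parameter. This ignores activation and
# KV-cache overhead, so treat the result as a floor, not a budget.

def model_memory_gb(n_params: float, bits_per_param: float = 16) -> float:
    """Approximate memory needed just to hold the weights, in GB."""
    bytes_total = n_params * bits_per_param / 8
    return bytes_total / 1e9

# 180B parameters at 16-bit -> the ~360 GB figure from the comment.
print(f"180B @ 16-bit: {model_memory_gb(180e9):.0f} GB")  # 360 GB
# The same 180B model quantized to 4-bit needs only a quarter of that.
print(f"180B @ 4-bit:  {model_memory_gb(180e9, 4):.0f} GB")  # 90 GB
# A 70B model at 4-bit fits in a 2x 24 GB GPU setup (weights only).
print(f"70B  @ 4-bit:  {model_memory_gb(70e9, 4):.0f} GB")  # 35 GB
```

    This is also why quantized formats matter so much for home hardware: the bits-per-parameter term is the only knob you control once you've picked a model size.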