• 1 Post
  • 20 Comments
Joined 1 year ago
Cake day: June 23rd, 2023

  • What the heck did you just say about storage, you little newbie? I’ll have you know I graduated top of my class in Computer Engineering, and I’ve been involved in numerous secret raids on terrible cable management, and I have over 300 confirmed SSD installs. You’re complaining about space on your PC like it’s some sort of divine mystery? Listen up, sailor.

    You’re whining about dropping $120 on BG3 and Starfield? You could get a 1TB SSD for as low as 35 bucks, you scallywag. Don’t even get me started on HDDs; a 1TB one is practically a steal at 22 dollars. And let’s go big or go home: 2TB HDD for 40-65 dollars, or if you’re feeling ritzy, a 2TB SSD at 60-90. Still less than your precious games, maggot.

    You’re out of SATA ports? Son, have you heard of a PCIe SATA card? Load that baby up. You’ve got more slots on your motherboard than you have excuses. Talking about running out of space with a setup that should give you 2-4TB at least? Don’t make me laugh. You’re telling me you can’t find space for your precious BG3? That’s only 150GB, sailor; uninstall it if you’re so keen on playing Starfield.

    And if you’ve hit the limits of both onboard SATA and PCIe, then I have one word for you: USB 3. Worst case, you get an external drive and run Starfield from there. Don’t act like your OS drive is the final frontier; there are many ways to expand your digital seas, you landlubber.

    So before you cry about storage again, maybe do some basic math and stop acting like you’re navigating uncharted waters. Get another drive, or walk the plank.



  • Contrary to some misconceptions, these SIMD capabilities did not amount to the processor being “128-bit”, as neither the memory addresses nor the integers themselves were 128-bit; only the shared SIMD/integer registers were. For comparison, 128-bit-wide registers and SIMD instructions had been present in the 32-bit x86 architecture since 1999, with the introduction of SSE. However, the internal data paths were 128 bits wide, and its processors were capable of operating on 4×32-bit quantities in parallel in single registers (a concrete x86 sketch follows below).

    Source
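
    To make the SSE comparison above concrete, here is a minimal C sketch (an illustration of the general idea, not taken from the quoted text): one 128-bit XMM register holds four 32-bit floats, and a single SSE instruction operates on all four lanes at once, while addresses and plain integers stay 32- or 64-bit.

    /* Minimal SSE illustration: four 32-bit floats packed into one 128-bit register.
       Builds with any x86-64 C compiler, since SSE is baseline on that architecture. */
    #include <stdio.h>
    #include <xmmintrin.h>  /* SSE intrinsics (introduced 1999) */

    int main(void) {
        /* Two 128-bit registers, each holding 4 x 32-bit floats
           (_mm_set_ps lists the lanes from high to low). */
        __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);      /* [1, 2, 3, 4]     */
        __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);  /* [10, 20, 30, 40] */

        /* One instruction (addps) adds all four 32-bit lanes in parallel. */
        __m128 sum = _mm_add_ps(a, b);

        float out[4];
        _mm_storeu_ps(out, sum);
        printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
        /* Prints 11.0 22.0 33.0 44.0 -- yet pointers and integers remain 32/64-bit,
           which is why nobody calls such a CPU "128-bit". */
        return 0;
    }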


  • GPT-4 writes better code than the junior developers on my team. I wish they would use it as a rubber duck, at the least.

    It often requires iteration and asking for certain things (logging, error handling, simplification/maintainability, etc.), but it gets there. I think it will eventually be possible to get AI to a point where it thinks about these things automatically instead of requiring prodding.

    Still doesn’t replace people, but makes them more effective.


  • FWIW, the AI features are not used to provide search results; they are all on-demand and triggered by the user (via Quick Answer, the Universal Summarizer, or the “discuss this site” feature).

    The founder is well aware of the problems with AI and that is taken into account when deciding how to use it in Kagi.

    See this link: https://blog.kagi.com/kagi-ai-search#philosophy

    Generative AI is a hot topic, but the technology still has flaws. Critics of AI warn that “[AI] will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge”.

    From an information retrieval point of view, relevant to our context of a search engine, we should acknowledge the two main limitations of the current generation of AI.

    Large language models (LLMs) should not be blindly trusted to provide factual information accurately. They have a significant risk of generating incorrect information or fabricating details (confabulating). This can easily mislead people who are not approaching LLMs pragmatically. (This is a product of the auto-regressive nature of these models, where the output is predicted one token at a time; once it strays from the “correct” path, the probability of which grows exponentially with the length of the output, it is “doomed” until the end of the output, without the ability to plan ahead or correct itself.)

    LLMs are not intelligent in the human sense. They have no understanding of the actual physical world. They do not have their own genuine opinions, emotions, or sense of self. We must avoid attributing human-like qualities to these systems or thinking of them as having human-level abilities. They are limited AI technologies. (In a way, they are similar to how a wheel can get us from point A to point B, sometimes much more efficiently than the human body can, but it lacks the planning ability and the agility of the human body to get us everywhere a human body can.)

    These limitations required us to pause and reflect on the impact on the search experience before incorporating this new technology for our customers. As a result, we came up with an AI integration philosophy that is guided by these principles:

    - AI should be used in a closed, defined context relevant to search (don’t make a therapist inside the search engine, for example).
    - AI should be used to enhance the search experience, not to create it or replace it (similar to how we use JavaScript in Kagi, where search still works perfectly fine when JS is disabled in the browser).
    - AI should be used to the extent that it enhances our humanity, not diminishes it (AI should be used to support users, not replace them).
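
    The “one token at a time” limitation described above is easier to see in code. Here is a toy C sketch (not Kagi’s code, and nothing like a real LLM internally; a hard-coded next-token table stands in for the model) of greedy autoregressive generation: each step commits to a single token, with no lookahead and no way to revise an earlier choice.

    /* Toy greedy "autoregressive" generator. A fixed next-token table replaces the
       neural network, but the decoding loop has the same one-token-at-a-time shape. */
    #include <stdio.h>

    #define VOCAB 5
    static const char *tokens[VOCAB] = {"<start>", "the", "cat", "sat", "<end>"};

    /* Stand-in for "pick the most likely next token". A real model conditions on the
       whole prefix; this toy only looks at the last emitted token. */
    static int next_token(int current) {
        static const int most_likely[VOCAB] = {1, 2, 3, 4, 4};
        return most_likely[current];
    }

    int main(void) {
        int tok = 0; /* <start> */
        /* Each iteration appends exactly one token; earlier output is never revisited,
           so an early wrong turn propagates to the end of the sequence. */
        while (tok != 4) {
            tok = next_token(tok);
            printf("%s ", tokens[tok]);
        }
        printf("\n"); /* prints: the cat sat <end> */
        return 0;
    }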


  • $25 is a lot per month but it is saving me a lot of time and helping me to find better results so I find it worth it.

    I justify the cost by relating it to how it helps me at work. I believe Kagi makes me more effective; my boss(es… :( ) and peers notice, and that translates to better performance evaluations and raises. I don’t hide my usage of it from my team, but I don’t think they realize how much of an advantage it gives me. Once you get the rankings and lenses tuned to your workflow, it’s amazing how it lets you cut through the nonsense of the internet.


  • I’ve been using it for about a year and a half, on the unlimited plan. I pay for the year up front for the discount. There’s no way I’m willingly going to stop using Kagi. I’m a developer and perform about 2500 searches a month.

    The ability to adjust the ranking of domains and the lenses save me a ton of time. No other engine comes close to the productivity.

    You can easily talk to the developers and founder, too. I’ve had many of my suggestions actually implemented. It’s great when you pay for the service and they are in it for you, not your data.


  • I use Hurl. Everything is just a text file:

    POST https://example.org/api/tests
    {
        "id": "4568",
        "evaluate": true
    }
    
    HTTP 200
    [Asserts]
    header "X-Frame-Options" == "SAMEORIGIN"
    jsonpath "$.status" == "RUNNING"    # Check the status code
    jsonpath "$.tests" count == 25      # Check the number of items
    jsonpath "$.id" matches /\d{4}/     # Check the format of the id